Test Report: KVM_Linux_crio 19546

9c905d7ddc6fcb24a41b70e16c9a4a5dd3740602:2024-10-04:36493
Failed tests (34/267)

Order  Failed test  Duration (s)
32 TestAddons/serial/GCPAuth/PullSecret 480.61
35 TestAddons/parallel/Ingress 155.71
38 TestAddons/parallel/MetricsServer 294.43
46 TestAddons/StoppedEnableDisable 154.42
165 TestMultiControlPlane/serial/StopSecondaryNode 141.76
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.71
167 TestMultiControlPlane/serial/RestartSecondaryNode 6.59
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.29
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 363.97
172 TestMultiControlPlane/serial/StopCluster 141.83
232 TestMultiNode/serial/RestartKeepsNodes 330.9
234 TestMultiNode/serial/StopMultiNode 145.44
241 TestPreload 271.79
249 TestKubernetesUpgrade 391.47
273 TestPause/serial/SecondStartNoReconfiguration 65.76
281 TestStartStop/group/old-k8s-version/serial/FirstStart 289.73
287 TestNoKubernetes/serial/StartNoArgs 61.57
295 TestStartStop/group/no-preload/serial/Stop 139.06
298 TestStartStop/group/embed-certs/serial/Stop 139.14
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
314 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.16
315 TestStartStop/group/old-k8s-version/serial/DeployApp 0.48
316 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 110.7
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
321 TestStartStop/group/old-k8s-version/serial/SecondStart 676.99
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.21
325 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.07
326 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.09
327 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.43
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 433.05
329 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 543.17
330 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 344.61
331 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 178.61
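
To reproduce any single failure from this table locally, a go test invocation of roughly the following shape should work. This is a sketch: the test directory, the timeout, and the -minikube-start-args flag and its values are assumptions about the minikube integration-test harness, not something stated in this report.

	# hypothetical local repro; adjust flags to match the harness you are using
	go test ./test/integration -v -timeout 90m \
	    -run 'TestAddons/serial/GCPAuth/PullSecret' \
	    -minikube-start-args='--driver=kvm2 --container-runtime=crio'
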
TestAddons/serial/GCPAuth/PullSecret (480.61s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:615: (dbg) Run:  kubectl --context addons-335265 create -f testdata/busybox.yaml
addons_test.go:622: (dbg) Run:  kubectl --context addons-335265 create sa gcp-auth-test
addons_test.go:628: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ea289386-a580-4a9e-ba94-c28adf57b2a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/serial/GCPAuth/PullSecret: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:628: ***** TestAddons/serial/GCPAuth/PullSecret: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:628: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-335265 -n addons-335265
addons_test.go:628: TestAddons/serial/GCPAuth/PullSecret: showing logs for failed pods as of 2024-10-04 03:00:09.256983113 +0000 UTC m=+728.189923672
addons_test.go:628: (dbg) Run:  kubectl --context addons-335265 describe po busybox -n default
addons_test.go:628: (dbg) kubectl --context addons-335265 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-335265/192.168.39.175
Start Time:       Fri, 04 Oct 2024 02:52:08 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.23
IPs:
  IP:  10.244.0.23
Containers:
  busybox:
    Container ID:  
    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-clgqq (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-clgqq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  8m1s                 default-scheduler  Successfully assigned default/busybox to addons-335265
Normal   Pulling    6m32s (x4 over 8m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning  Failed     6m32s (x4 over 8m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
Warning  Failed     6m32s (x4 over 8m)   kubelet            Error: ErrImagePull
Warning  Failed     6m18s (x6 over 8m)   kubelet            Error: ImagePullBackOff
Normal   BackOff    2m55s (x20 over 8m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
addons_test.go:628: (dbg) Run:  kubectl --context addons-335265 logs busybox -n default
addons_test.go:628: (dbg) Non-zero exit: kubectl --context addons-335265 logs busybox -n default: exit status 1 (71.830234ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:628: kubectl --context addons-335265 logs busybox -n default: exit status 1
addons_test.go:630: wait: integration-test=busybox within 8m0s: context deadline exceeded
--- FAIL: TestAddons/serial/GCPAuth/PullSecret (480.61s)
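
The ImagePullBackOff above is an authentication failure while pulling a public gcr.io image, which points at the image pull secret injected by the gcp-auth addon (this test deliberately uses fake credentials). A minimal manual check, assuming the injected secret is named gcp-auth in the default namespace:

	kubectl --context addons-335265 -n default get pod busybox -o jsonpath='{.spec.imagePullSecrets}'
	kubectl --context addons-335265 -n default get secret gcp-auth -o yaml        # secret name assumed
	kubectl --context addons-335265 -n default get events --field-selector involvedObject.name=busybox
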

TestAddons/parallel/Ingress (155.71s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:208: (dbg) Run:  kubectl --context addons-335265 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:233: (dbg) Run:  kubectl --context addons-335265 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:246: (dbg) Run:  kubectl --context addons-335265 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:251: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d3df1714-d414-4b36-9919-09dcd9c98407] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d3df1714-d414-4b36-9919-09dcd9c98407] Running
addons_test.go:251: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004476416s
I1004 03:01:08.826206   16879 kapi.go:150] Service nginx in namespace default found.
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-335265 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.747434928s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:279: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:287: (dbg) Run:  kubectl --context addons-335265 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:292: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 ip
addons_test.go:298: (dbg) Run:  nslookup hello-john.test 192.168.39.175
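The ssh command above exited with status 28, which is curl's timeout code, so the request never received an HTTP response from the ingress controller inside the VM. A quick manual triage sketch: re-run the probe verbosely with a short timeout, then check the controller pods and the created Ingress (the Ingress name nginx-ingress is an assumption based on testdata/nginx-ingress-v1.yaml):

	out/minikube-linux-amd64 -p addons-335265 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-335265 -n ingress-nginx get pods,svc -o wide
	kubectl --context addons-335265 -n default describe ingress nginx-ingress   # name assumed
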
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-335265 -n addons-335265
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-335265 logs -n 25: (1.267961757s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-583140                                                                     | download-only-583140 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| delete  | -p download-only-920812                                                                     | download-only-920812 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| delete  | -p download-only-583140                                                                     | download-only-583140 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-774332 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | binary-mirror-774332                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34587                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-774332                                                                     | binary-mirror-774332 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| addons  | disable dashboard -p                                                                        | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | addons-335265                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | addons-335265                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-335265 --wait=true                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:52 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=logviewer                                                                          |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-335265 addons disable                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 02:52 UTC | 04 Oct 24 02:52 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-335265 addons disable                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-335265 ssh cat                                                                       | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | /opt/local-path-provisioner/pvc-14e1b505-7a2b-48a9-8f30-4f0b19662b44_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-335265 addons disable                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-335265 ip                                                                            | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	| addons  | addons-335265 addons disable                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-335265 addons disable                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | logviewer --alsologtostderr                                                                 |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-335265 addons                                                                        | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-335265 ssh curl -s                                                                   | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-335265 addons                                                                        | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-335265 addons                                                                        | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-335265 addons disable                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-335265 addons                                                                        | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | -p addons-335265                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | -p addons-335265                                                                            |                      |         |         |                     |                     |
	| addons  | addons-335265 addons disable                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-335265 ip                                                                            | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:03 UTC | 04 Oct 24 03:03 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 02:48:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 02:48:42.350397   17586 out.go:345] Setting OutFile to fd 1 ...
	I1004 02:48:42.350509   17586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:48:42.350518   17586 out.go:358] Setting ErrFile to fd 2...
	I1004 02:48:42.350523   17586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:48:42.350678   17586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 02:48:42.351312   17586 out.go:352] Setting JSON to false
	I1004 02:48:42.352109   17586 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1867,"bootTime":1728008255,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 02:48:42.352200   17586 start.go:139] virtualization: kvm guest
	I1004 02:48:42.354280   17586 out.go:177] * [addons-335265] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 02:48:42.355686   17586 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 02:48:42.355690   17586 notify.go:220] Checking for updates...
	I1004 02:48:42.356993   17586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 02:48:42.358275   17586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 02:48:42.359475   17586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 02:48:42.360643   17586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 02:48:42.361726   17586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 02:48:42.363162   17586 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 02:48:42.396244   17586 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 02:48:42.397409   17586 start.go:297] selected driver: kvm2
	I1004 02:48:42.397422   17586 start.go:901] validating driver "kvm2" against <nil>
	I1004 02:48:42.397433   17586 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 02:48:42.398134   17586 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:48:42.398219   17586 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 02:48:42.413943   17586 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 02:48:42.413998   17586 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 02:48:42.414283   17586 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 02:48:42.414315   17586 cni.go:84] Creating CNI manager for ""
	I1004 02:48:42.414372   17586 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 02:48:42.414386   17586 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 02:48:42.414458   17586 start.go:340] cluster config:
	{Name:addons-335265 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-335265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHA
gentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 02:48:42.414603   17586 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:48:42.416533   17586 out.go:177] * Starting "addons-335265" primary control-plane node in "addons-335265" cluster
	I1004 02:48:42.417803   17586 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 02:48:42.417858   17586 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 02:48:42.417884   17586 cache.go:56] Caching tarball of preloaded images
	I1004 02:48:42.417982   17586 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 02:48:42.417994   17586 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 02:48:42.418317   17586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/config.json ...
	I1004 02:48:42.418344   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/config.json: {Name:mkd46b476c8343679536647b0d03e29a5f854756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:42.418499   17586 start.go:360] acquireMachinesLock for addons-335265: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 02:48:42.418559   17586 start.go:364] duration metric: took 45.184µs to acquireMachinesLock for "addons-335265"
	I1004 02:48:42.418583   17586 start.go:93] Provisioning new machine with config: &{Name:addons-335265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:addons-335265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:48:42.418655   17586 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 02:48:42.420283   17586 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1004 02:48:42.420438   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:48:42.420479   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:48:42.435142   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33693
	I1004 02:48:42.435633   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:48:42.436190   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:48:42.436214   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:48:42.436553   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:48:42.436738   17586 main.go:141] libmachine: (addons-335265) Calling .GetMachineName
	I1004 02:48:42.436869   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:48:42.437005   17586 start.go:159] libmachine.API.Create for "addons-335265" (driver="kvm2")
	I1004 02:48:42.437034   17586 client.go:168] LocalClient.Create starting
	I1004 02:48:42.437077   17586 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 02:48:42.684563   17586 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 02:48:42.910608   17586 main.go:141] libmachine: Running pre-create checks...
	I1004 02:48:42.910635   17586 main.go:141] libmachine: (addons-335265) Calling .PreCreateCheck
	I1004 02:48:42.911169   17586 main.go:141] libmachine: (addons-335265) Calling .GetConfigRaw
	I1004 02:48:42.911608   17586 main.go:141] libmachine: Creating machine...
	I1004 02:48:42.911622   17586 main.go:141] libmachine: (addons-335265) Calling .Create
	I1004 02:48:42.911773   17586 main.go:141] libmachine: (addons-335265) Creating KVM machine...
	I1004 02:48:42.912946   17586 main.go:141] libmachine: (addons-335265) DBG | found existing default KVM network
	I1004 02:48:42.913626   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:42.913481   17608 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I1004 02:48:42.913645   17586 main.go:141] libmachine: (addons-335265) DBG | created network xml: 
	I1004 02:48:42.913652   17586 main.go:141] libmachine: (addons-335265) DBG | <network>
	I1004 02:48:42.913658   17586 main.go:141] libmachine: (addons-335265) DBG |   <name>mk-addons-335265</name>
	I1004 02:48:42.913665   17586 main.go:141] libmachine: (addons-335265) DBG |   <dns enable='no'/>
	I1004 02:48:42.913670   17586 main.go:141] libmachine: (addons-335265) DBG |   
	I1004 02:48:42.913676   17586 main.go:141] libmachine: (addons-335265) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1004 02:48:42.913681   17586 main.go:141] libmachine: (addons-335265) DBG |     <dhcp>
	I1004 02:48:42.913687   17586 main.go:141] libmachine: (addons-335265) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1004 02:48:42.913693   17586 main.go:141] libmachine: (addons-335265) DBG |     </dhcp>
	I1004 02:48:42.913699   17586 main.go:141] libmachine: (addons-335265) DBG |   </ip>
	I1004 02:48:42.913704   17586 main.go:141] libmachine: (addons-335265) DBG |   
	I1004 02:48:42.913709   17586 main.go:141] libmachine: (addons-335265) DBG | </network>
	I1004 02:48:42.913717   17586 main.go:141] libmachine: (addons-335265) DBG | 
	I1004 02:48:42.919395   17586 main.go:141] libmachine: (addons-335265) DBG | trying to create private KVM network mk-addons-335265 192.168.39.0/24...
	I1004 02:48:42.986986   17586 main.go:141] libmachine: (addons-335265) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265 ...
	I1004 02:48:42.987017   17586 main.go:141] libmachine: (addons-335265) DBG | private KVM network mk-addons-335265 192.168.39.0/24 created
	I1004 02:48:42.987049   17586 main.go:141] libmachine: (addons-335265) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 02:48:42.987063   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:42.986925   17608 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 02:48:42.987085   17586 main.go:141] libmachine: (addons-335265) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 02:48:43.256261   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:43.256128   17608 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa...
	I1004 02:48:43.498782   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:43.498636   17608 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/addons-335265.rawdisk...
	I1004 02:48:43.498813   17586 main.go:141] libmachine: (addons-335265) DBG | Writing magic tar header
	I1004 02:48:43.498823   17586 main.go:141] libmachine: (addons-335265) DBG | Writing SSH key tar header
	I1004 02:48:43.498834   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:43.498759   17608 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265 ...
	I1004 02:48:43.498851   17586 main.go:141] libmachine: (addons-335265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265
	I1004 02:48:43.498906   17586 main.go:141] libmachine: (addons-335265) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265 (perms=drwx------)
	I1004 02:48:43.498928   17586 main.go:141] libmachine: (addons-335265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 02:48:43.498939   17586 main.go:141] libmachine: (addons-335265) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 02:48:43.498948   17586 main.go:141] libmachine: (addons-335265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 02:48:43.498961   17586 main.go:141] libmachine: (addons-335265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 02:48:43.498967   17586 main.go:141] libmachine: (addons-335265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 02:48:43.498972   17586 main.go:141] libmachine: (addons-335265) DBG | Checking permissions on dir: /home/jenkins
	I1004 02:48:43.498977   17586 main.go:141] libmachine: (addons-335265) DBG | Checking permissions on dir: /home
	I1004 02:48:43.498986   17586 main.go:141] libmachine: (addons-335265) DBG | Skipping /home - not owner
	I1004 02:48:43.498997   17586 main.go:141] libmachine: (addons-335265) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 02:48:43.499010   17586 main.go:141] libmachine: (addons-335265) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 02:48:43.499033   17586 main.go:141] libmachine: (addons-335265) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 02:48:43.499048   17586 main.go:141] libmachine: (addons-335265) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 02:48:43.499053   17586 main.go:141] libmachine: (addons-335265) Creating domain...
	I1004 02:48:43.500076   17586 main.go:141] libmachine: (addons-335265) define libvirt domain using xml: 
	I1004 02:48:43.500107   17586 main.go:141] libmachine: (addons-335265) <domain type='kvm'>
	I1004 02:48:43.500156   17586 main.go:141] libmachine: (addons-335265)   <name>addons-335265</name>
	I1004 02:48:43.500184   17586 main.go:141] libmachine: (addons-335265)   <memory unit='MiB'>4000</memory>
	I1004 02:48:43.500193   17586 main.go:141] libmachine: (addons-335265)   <vcpu>2</vcpu>
	I1004 02:48:43.500203   17586 main.go:141] libmachine: (addons-335265)   <features>
	I1004 02:48:43.500235   17586 main.go:141] libmachine: (addons-335265)     <acpi/>
	I1004 02:48:43.500252   17586 main.go:141] libmachine: (addons-335265)     <apic/>
	I1004 02:48:43.500263   17586 main.go:141] libmachine: (addons-335265)     <pae/>
	I1004 02:48:43.500267   17586 main.go:141] libmachine: (addons-335265)     
	I1004 02:48:43.500275   17586 main.go:141] libmachine: (addons-335265)   </features>
	I1004 02:48:43.500279   17586 main.go:141] libmachine: (addons-335265)   <cpu mode='host-passthrough'>
	I1004 02:48:43.500289   17586 main.go:141] libmachine: (addons-335265)   
	I1004 02:48:43.500301   17586 main.go:141] libmachine: (addons-335265)   </cpu>
	I1004 02:48:43.500308   17586 main.go:141] libmachine: (addons-335265)   <os>
	I1004 02:48:43.500312   17586 main.go:141] libmachine: (addons-335265)     <type>hvm</type>
	I1004 02:48:43.500317   17586 main.go:141] libmachine: (addons-335265)     <boot dev='cdrom'/>
	I1004 02:48:43.500324   17586 main.go:141] libmachine: (addons-335265)     <boot dev='hd'/>
	I1004 02:48:43.500329   17586 main.go:141] libmachine: (addons-335265)     <bootmenu enable='no'/>
	I1004 02:48:43.500332   17586 main.go:141] libmachine: (addons-335265)   </os>
	I1004 02:48:43.500338   17586 main.go:141] libmachine: (addons-335265)   <devices>
	I1004 02:48:43.500345   17586 main.go:141] libmachine: (addons-335265)     <disk type='file' device='cdrom'>
	I1004 02:48:43.500353   17586 main.go:141] libmachine: (addons-335265)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/boot2docker.iso'/>
	I1004 02:48:43.500360   17586 main.go:141] libmachine: (addons-335265)       <target dev='hdc' bus='scsi'/>
	I1004 02:48:43.500365   17586 main.go:141] libmachine: (addons-335265)       <readonly/>
	I1004 02:48:43.500372   17586 main.go:141] libmachine: (addons-335265)     </disk>
	I1004 02:48:43.500378   17586 main.go:141] libmachine: (addons-335265)     <disk type='file' device='disk'>
	I1004 02:48:43.500385   17586 main.go:141] libmachine: (addons-335265)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 02:48:43.500393   17586 main.go:141] libmachine: (addons-335265)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/addons-335265.rawdisk'/>
	I1004 02:48:43.500400   17586 main.go:141] libmachine: (addons-335265)       <target dev='hda' bus='virtio'/>
	I1004 02:48:43.500405   17586 main.go:141] libmachine: (addons-335265)     </disk>
	I1004 02:48:43.500409   17586 main.go:141] libmachine: (addons-335265)     <interface type='network'>
	I1004 02:48:43.500417   17586 main.go:141] libmachine: (addons-335265)       <source network='mk-addons-335265'/>
	I1004 02:48:43.500421   17586 main.go:141] libmachine: (addons-335265)       <model type='virtio'/>
	I1004 02:48:43.500426   17586 main.go:141] libmachine: (addons-335265)     </interface>
	I1004 02:48:43.500433   17586 main.go:141] libmachine: (addons-335265)     <interface type='network'>
	I1004 02:48:43.500438   17586 main.go:141] libmachine: (addons-335265)       <source network='default'/>
	I1004 02:48:43.500444   17586 main.go:141] libmachine: (addons-335265)       <model type='virtio'/>
	I1004 02:48:43.500449   17586 main.go:141] libmachine: (addons-335265)     </interface>
	I1004 02:48:43.500454   17586 main.go:141] libmachine: (addons-335265)     <serial type='pty'>
	I1004 02:48:43.500459   17586 main.go:141] libmachine: (addons-335265)       <target port='0'/>
	I1004 02:48:43.500463   17586 main.go:141] libmachine: (addons-335265)     </serial>
	I1004 02:48:43.500471   17586 main.go:141] libmachine: (addons-335265)     <console type='pty'>
	I1004 02:48:43.500481   17586 main.go:141] libmachine: (addons-335265)       <target type='serial' port='0'/>
	I1004 02:48:43.500486   17586 main.go:141] libmachine: (addons-335265)     </console>
	I1004 02:48:43.500492   17586 main.go:141] libmachine: (addons-335265)     <rng model='virtio'>
	I1004 02:48:43.500497   17586 main.go:141] libmachine: (addons-335265)       <backend model='random'>/dev/random</backend>
	I1004 02:48:43.500501   17586 main.go:141] libmachine: (addons-335265)     </rng>
	I1004 02:48:43.500508   17586 main.go:141] libmachine: (addons-335265)     
	I1004 02:48:43.500512   17586 main.go:141] libmachine: (addons-335265)     
	I1004 02:48:43.500517   17586 main.go:141] libmachine: (addons-335265)   </devices>
	I1004 02:48:43.500523   17586 main.go:141] libmachine: (addons-335265) </domain>
	I1004 02:48:43.500529   17586 main.go:141] libmachine: (addons-335265) 
	I1004 02:48:43.506147   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:e4:2e:9f in network default
	I1004 02:48:43.506594   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:43.506612   17586 main.go:141] libmachine: (addons-335265) Ensuring networks are active...
	I1004 02:48:43.507251   17586 main.go:141] libmachine: (addons-335265) Ensuring network default is active
	I1004 02:48:43.507517   17586 main.go:141] libmachine: (addons-335265) Ensuring network mk-addons-335265 is active
	I1004 02:48:43.508000   17586 main.go:141] libmachine: (addons-335265) Getting domain xml...
	I1004 02:48:43.508579   17586 main.go:141] libmachine: (addons-335265) Creating domain...
	I1004 02:48:44.907669   17586 main.go:141] libmachine: (addons-335265) Waiting to get IP...
	I1004 02:48:44.908672   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:44.909073   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:44.909127   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:44.909073   17608 retry.go:31] will retry after 280.008027ms: waiting for machine to come up
	I1004 02:48:45.190666   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:45.191125   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:45.191152   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:45.191075   17608 retry.go:31] will retry after 243.041026ms: waiting for machine to come up
	I1004 02:48:45.435512   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:45.435972   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:45.435998   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:45.435925   17608 retry.go:31] will retry after 422.640633ms: waiting for machine to come up
	I1004 02:48:45.860583   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:45.861101   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:45.861122   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:45.861056   17608 retry.go:31] will retry after 564.471931ms: waiting for machine to come up
	I1004 02:48:46.426875   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:46.427358   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:46.427395   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:46.427309   17608 retry.go:31] will retry after 530.666332ms: waiting for machine to come up
	I1004 02:48:46.960292   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:46.960759   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:46.960789   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:46.960715   17608 retry.go:31] will retry after 764.969096ms: waiting for machine to come up
	I1004 02:48:47.727333   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:47.727828   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:47.727855   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:47.727793   17608 retry.go:31] will retry after 1.186987659s: waiting for machine to come up
	I1004 02:48:48.916278   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:48.916768   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:48.916796   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:48.916723   17608 retry.go:31] will retry after 1.406687575s: waiting for machine to come up
	I1004 02:48:50.325402   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:50.325831   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:50.325860   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:50.325784   17608 retry.go:31] will retry after 1.690401875s: waiting for machine to come up
	I1004 02:48:52.018537   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:52.019077   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:52.019095   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:52.019046   17608 retry.go:31] will retry after 1.543506793s: waiting for machine to come up
	I1004 02:48:53.563909   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:53.564444   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:53.564502   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:53.564420   17608 retry.go:31] will retry after 2.533992227s: waiting for machine to come up
	I1004 02:48:56.100836   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:56.101280   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:56.101303   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:56.101236   17608 retry.go:31] will retry after 2.289001665s: waiting for machine to come up
	I1004 02:48:58.392193   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:58.392572   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:58.392593   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:58.392551   17608 retry.go:31] will retry after 3.362876269s: waiting for machine to come up
	I1004 02:49:01.757665   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:01.758055   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:49:01.758076   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:49:01.758015   17608 retry.go:31] will retry after 5.109433719s: waiting for machine to come up
	I1004 02:49:06.872014   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:06.872375   17586 main.go:141] libmachine: (addons-335265) Found IP for machine: 192.168.39.175
	I1004 02:49:06.872400   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has current primary IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:06.872407   17586 main.go:141] libmachine: (addons-335265) Reserving static IP address...
	I1004 02:49:06.872838   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find host DHCP lease matching {name: "addons-335265", mac: "52:54:00:ce:42:f3", ip: "192.168.39.175"} in network mk-addons-335265
	I1004 02:49:06.945882   17586 main.go:141] libmachine: (addons-335265) DBG | Getting to WaitForSSH function...
	I1004 02:49:06.945919   17586 main.go:141] libmachine: (addons-335265) Reserved static IP address: 192.168.39.175
	I1004 02:49:06.945934   17586 main.go:141] libmachine: (addons-335265) Waiting for SSH to be available...
	I1004 02:49:06.948082   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:06.948305   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265
	I1004 02:49:06.948333   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find defined IP address of network mk-addons-335265 interface with MAC address 52:54:00:ce:42:f3
	I1004 02:49:06.948447   17586 main.go:141] libmachine: (addons-335265) DBG | Using SSH client type: external
	I1004 02:49:06.948475   17586 main.go:141] libmachine: (addons-335265) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa (-rw-------)
	I1004 02:49:06.948510   17586 main.go:141] libmachine: (addons-335265) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 02:49:06.948536   17586 main.go:141] libmachine: (addons-335265) DBG | About to run SSH command:
	I1004 02:49:06.948583   17586 main.go:141] libmachine: (addons-335265) DBG | exit 0
	I1004 02:49:06.958792   17586 main.go:141] libmachine: (addons-335265) DBG | SSH cmd err, output: exit status 255: 
	I1004 02:49:06.958815   17586 main.go:141] libmachine: (addons-335265) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1004 02:49:06.958822   17586 main.go:141] libmachine: (addons-335265) DBG | command : exit 0
	I1004 02:49:06.958827   17586 main.go:141] libmachine: (addons-335265) DBG | err     : exit status 255
	I1004 02:49:06.958834   17586 main.go:141] libmachine: (addons-335265) DBG | output  : 
	I1004 02:49:09.960625   17586 main.go:141] libmachine: (addons-335265) DBG | Getting to WaitForSSH function...
	I1004 02:49:09.963056   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:09.963378   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:09.963405   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:09.963542   17586 main.go:141] libmachine: (addons-335265) DBG | Using SSH client type: external
	I1004 02:49:09.963555   17586 main.go:141] libmachine: (addons-335265) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa (-rw-------)
	I1004 02:49:09.963580   17586 main.go:141] libmachine: (addons-335265) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.175 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 02:49:09.963598   17586 main.go:141] libmachine: (addons-335265) DBG | About to run SSH command:
	I1004 02:49:09.963612   17586 main.go:141] libmachine: (addons-335265) DBG | exit 0
	I1004 02:49:10.092290   17586 main.go:141] libmachine: (addons-335265) DBG | SSH cmd err, output: <nil>: 
	I1004 02:49:10.092561   17586 main.go:141] libmachine: (addons-335265) KVM machine creation complete!
	I1004 02:49:10.092892   17586 main.go:141] libmachine: (addons-335265) Calling .GetConfigRaw
	I1004 02:49:10.093446   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:10.093675   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:10.093857   17586 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 02:49:10.093871   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:10.095479   17586 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 02:49:10.095495   17586 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 02:49:10.095502   17586 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 02:49:10.095510   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:10.097826   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.098154   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:10.098188   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.098331   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:10.098549   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.098690   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.098824   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:10.099115   17586 main.go:141] libmachine: Using SSH client type: native
	I1004 02:49:10.099300   17586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1004 02:49:10.099315   17586 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 02:49:10.207072   17586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 02:49:10.207098   17586 main.go:141] libmachine: Detecting the provisioner...
	I1004 02:49:10.207109   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:10.209769   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.210218   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:10.210240   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.210452   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:10.210710   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.210900   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.211131   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:10.211354   17586 main.go:141] libmachine: Using SSH client type: native
	I1004 02:49:10.211542   17586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1004 02:49:10.211556   17586 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 02:49:10.320576   17586 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 02:49:10.320665   17586 main.go:141] libmachine: found compatible host: buildroot
	I1004 02:49:10.320675   17586 main.go:141] libmachine: Provisioning with buildroot...
	I1004 02:49:10.320682   17586 main.go:141] libmachine: (addons-335265) Calling .GetMachineName
	I1004 02:49:10.320922   17586 buildroot.go:166] provisioning hostname "addons-335265"
	I1004 02:49:10.320941   17586 main.go:141] libmachine: (addons-335265) Calling .GetMachineName
	I1004 02:49:10.321085   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:10.323697   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.324018   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:10.324041   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.324264   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:10.324467   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.324678   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.324791   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:10.324947   17586 main.go:141] libmachine: Using SSH client type: native
	I1004 02:49:10.325104   17586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1004 02:49:10.325115   17586 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-335265 && echo "addons-335265" | sudo tee /etc/hostname
	I1004 02:49:10.450449   17586 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-335265
	
	I1004 02:49:10.450482   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:10.453337   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.453670   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:10.453698   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.453862   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:10.454033   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.454178   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.454281   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:10.454565   17586 main.go:141] libmachine: Using SSH client type: native
	I1004 02:49:10.454749   17586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1004 02:49:10.454771   17586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-335265' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-335265/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-335265' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 02:49:10.572646   17586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 02:49:10.572678   17586 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 02:49:10.572716   17586 buildroot.go:174] setting up certificates
	I1004 02:49:10.572728   17586 provision.go:84] configureAuth start
	I1004 02:49:10.572737   17586 main.go:141] libmachine: (addons-335265) Calling .GetMachineName
	I1004 02:49:10.572974   17586 main.go:141] libmachine: (addons-335265) Calling .GetIP
	I1004 02:49:10.576042   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.576425   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:10.576456   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.576557   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:10.578465   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.578770   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:10.578800   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.578937   17586 provision.go:143] copyHostCerts
	I1004 02:49:10.579011   17586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 02:49:10.579140   17586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 02:49:10.579215   17586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 02:49:10.579278   17586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.addons-335265 san=[127.0.0.1 192.168.39.175 addons-335265 localhost minikube]
	I1004 02:49:10.877231   17586 provision.go:177] copyRemoteCerts
	I1004 02:49:10.877293   17586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 02:49:10.877318   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:10.880092   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.880505   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:10.880533   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.880781   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:10.880973   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.881136   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:10.881277   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:10.966818   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 02:49:10.993346   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 02:49:11.018096   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1004 02:49:11.043905   17586 provision.go:87] duration metric: took 471.164406ms to configureAuth
	I1004 02:49:11.043940   17586 buildroot.go:189] setting minikube options for container-runtime
	I1004 02:49:11.044149   17586 config.go:182] Loaded profile config "addons-335265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 02:49:11.044233   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:11.046930   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.047265   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:11.047292   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.047424   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:11.047609   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:11.047765   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:11.047895   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:11.048041   17586 main.go:141] libmachine: Using SSH client type: native
	I1004 02:49:11.048218   17586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1004 02:49:11.048238   17586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 02:49:11.294849   17586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 02:49:11.294884   17586 main.go:141] libmachine: Checking connection to Docker...
	I1004 02:49:11.294895   17586 main.go:141] libmachine: (addons-335265) Calling .GetURL
	I1004 02:49:11.296425   17586 main.go:141] libmachine: (addons-335265) DBG | Using libvirt version 6000000
	I1004 02:49:11.298760   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.299055   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:11.299085   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.299271   17586 main.go:141] libmachine: Docker is up and running!
	I1004 02:49:11.299287   17586 main.go:141] libmachine: Reticulating splines...
	I1004 02:49:11.299297   17586 client.go:171] duration metric: took 28.86225455s to LocalClient.Create
	I1004 02:49:11.299323   17586 start.go:167] duration metric: took 28.862319682s to libmachine.API.Create "addons-335265"
	I1004 02:49:11.299337   17586 start.go:293] postStartSetup for "addons-335265" (driver="kvm2")
	I1004 02:49:11.299352   17586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 02:49:11.299373   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:11.299598   17586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 02:49:11.299620   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:11.301489   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.301799   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:11.301822   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.302037   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:11.302209   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:11.302372   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:11.302491   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:11.390911   17586 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 02:49:11.395868   17586 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 02:49:11.395891   17586 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 02:49:11.395962   17586 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 02:49:11.395987   17586 start.go:296] duration metric: took 96.641368ms for postStartSetup
	I1004 02:49:11.396016   17586 main.go:141] libmachine: (addons-335265) Calling .GetConfigRaw
	I1004 02:49:11.396583   17586 main.go:141] libmachine: (addons-335265) Calling .GetIP
	I1004 02:49:11.399152   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.399521   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:11.399544   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.399771   17586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/config.json ...
	I1004 02:49:11.399985   17586 start.go:128] duration metric: took 28.981318746s to createHost
	I1004 02:49:11.400011   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:11.402487   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.402761   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:11.402787   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.402955   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:11.403111   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:11.403269   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:11.403500   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:11.403682   17586 main.go:141] libmachine: Using SSH client type: native
	I1004 02:49:11.403897   17586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1004 02:49:11.403913   17586 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 02:49:11.516789   17586 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728010151.472017341
	
	I1004 02:49:11.516825   17586 fix.go:216] guest clock: 1728010151.472017341
	I1004 02:49:11.516839   17586 fix.go:229] Guest: 2024-10-04 02:49:11.472017341 +0000 UTC Remote: 2024-10-04 02:49:11.399997501 +0000 UTC m=+29.083341978 (delta=72.01984ms)
	I1004 02:49:11.516902   17586 fix.go:200] guest clock delta is within tolerance: 72.01984ms
	I1004 02:49:11.516911   17586 start.go:83] releasing machines lock for "addons-335265", held for 29.098338654s
	I1004 02:49:11.516940   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:11.517173   17586 main.go:141] libmachine: (addons-335265) Calling .GetIP
	I1004 02:49:11.519751   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.520075   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:11.520098   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.520295   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:11.520918   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:11.521064   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:11.521171   17586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 02:49:11.521211   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:11.521344   17586 ssh_runner.go:195] Run: cat /version.json
	I1004 02:49:11.521370   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:11.523881   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.524070   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.524297   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:11.524333   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.524420   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:11.524432   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:11.524442   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.524628   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:11.524666   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:11.524776   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:11.524845   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:11.524914   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:11.524978   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:11.525036   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:11.605338   17586 ssh_runner.go:195] Run: systemctl --version
	I1004 02:49:11.632586   17586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 02:49:11.794711   17586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 02:49:11.800775   17586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 02:49:11.800851   17586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 02:49:11.818575   17586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 02:49:11.818603   17586 start.go:495] detecting cgroup driver to use...
	I1004 02:49:11.818661   17586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 02:49:11.837900   17586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 02:49:11.853499   17586 docker.go:217] disabling cri-docker service (if available) ...
	I1004 02:49:11.853564   17586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 02:49:11.868702   17586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 02:49:11.883720   17586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 02:49:12.006317   17586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 02:49:12.172796   17586 docker.go:233] disabling docker service ...
	I1004 02:49:12.172875   17586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 02:49:12.188267   17586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 02:49:12.201665   17586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 02:49:12.331533   17586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 02:49:12.470603   17586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 02:49:12.485954   17586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 02:49:12.505763   17586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 02:49:12.505829   17586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:49:12.517242   17586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 02:49:12.517299   17586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:49:12.529098   17586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:49:12.540182   17586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:49:12.551326   17586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 02:49:12.562891   17586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:49:12.574005   17586 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:49:12.592358   17586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:49:12.603654   17586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 02:49:12.613663   17586 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 02:49:12.613728   17586 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 02:49:12.626763   17586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 02:49:12.637129   17586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:49:12.757602   17586 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 02:49:12.852191   17586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 02:49:12.852270   17586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 02:49:12.856943   17586 start.go:563] Will wait 60s for crictl version
	I1004 02:49:12.857013   17586 ssh_runner.go:195] Run: which crictl
	I1004 02:49:12.860955   17586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 02:49:12.909268   17586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 02:49:12.909397   17586 ssh_runner.go:195] Run: crio --version
	I1004 02:49:12.939138   17586 ssh_runner.go:195] Run: crio --version
	I1004 02:49:12.970565   17586 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 02:49:12.972062   17586 main.go:141] libmachine: (addons-335265) Calling .GetIP
	I1004 02:49:12.974673   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:12.974998   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:12.975046   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:12.975247   17586 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 02:49:12.979596   17586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:49:12.992202   17586 kubeadm.go:883] updating cluster {Name:addons-335265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-335265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 02:49:12.992318   17586 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 02:49:12.992371   17586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:49:13.024977   17586 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 02:49:13.025060   17586 ssh_runner.go:195] Run: which lz4
	I1004 02:49:13.029250   17586 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 02:49:13.033491   17586 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 02:49:13.033523   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 02:49:14.390272   17586 crio.go:462] duration metric: took 1.361058115s to copy over tarball
	I1004 02:49:14.390346   17586 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 02:49:16.621297   17586 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.230905703s)
	I1004 02:49:16.621327   17586 crio.go:469] duration metric: took 2.231020363s to extract the tarball
	I1004 02:49:16.621336   17586 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 02:49:16.657763   17586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:49:16.704887   17586 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 02:49:16.704911   17586 cache_images.go:84] Images are preloaded, skipping loading
	I1004 02:49:16.704924   17586 kubeadm.go:934] updating node { 192.168.39.175 8443 v1.31.1 crio true true} ...
	I1004 02:49:16.705024   17586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-335265 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-335265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 02:49:16.705105   17586 ssh_runner.go:195] Run: crio config
	I1004 02:49:16.752599   17586 cni.go:84] Creating CNI manager for ""
	I1004 02:49:16.752620   17586 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 02:49:16.752629   17586 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 02:49:16.752650   17586 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.175 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-335265 NodeName:addons-335265 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 02:49:16.752801   17586 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-335265"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.175"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 02:49:16.752893   17586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 02:49:16.763279   17586 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 02:49:16.763338   17586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 02:49:16.773228   17586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1004 02:49:16.791835   17586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 02:49:16.809879   17586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1004 02:49:16.828225   17586 ssh_runner.go:195] Run: grep 192.168.39.175	control-plane.minikube.internal$ /etc/hosts
	I1004 02:49:16.832408   17586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.175	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:49:16.845664   17586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:49:16.960830   17586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 02:49:16.978732   17586 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265 for IP: 192.168.39.175
	I1004 02:49:16.978752   17586 certs.go:194] generating shared ca certs ...
	I1004 02:49:16.978767   17586 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:16.978914   17586 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 02:49:17.255351   17586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt ...
	I1004 02:49:17.255388   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt: {Name:mk416c223763546798382e3c7879793784b195dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.255580   17586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key ...
	I1004 02:49:17.255593   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key: {Name:mk7b03a367acc8df80e5914cf093d4079eeff7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.255667   17586 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 02:49:17.344240   17586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt ...
	I1004 02:49:17.344268   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt: {Name:mk830b345da9508afe57eca6a4e1ca21dba647dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.344450   17586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key ...
	I1004 02:49:17.344468   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key: {Name:mk6d04a07117246d1d3824f24d28d81c1c93d061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.344579   17586 certs.go:256] generating profile certs ...
	I1004 02:49:17.344652   17586 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.key
	I1004 02:49:17.344681   17586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt with IP's: []
	I1004 02:49:17.435135   17586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt ...
	I1004 02:49:17.435170   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: {Name:mk534c4041233364f5de809317ca233dbe4111cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.435342   17586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.key ...
	I1004 02:49:17.435354   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.key: {Name:mk28bb05ec433e3b1aa54e512ad157bcefd823a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.435420   17586 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.key.1a707ed9
	I1004 02:49:17.435438   17586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.crt.1a707ed9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.175]
	I1004 02:49:17.528084   17586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.crt.1a707ed9 ...
	I1004 02:49:17.528115   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.crt.1a707ed9: {Name:mkf136cc463a971160b90826f670648c403a3599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.528280   17586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.key.1a707ed9 ...
	I1004 02:49:17.528295   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.key.1a707ed9: {Name:mkcf6196d67e4c9ec7e9bdd97058b1b2e144b2dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.528364   17586 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.crt.1a707ed9 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.crt
	I1004 02:49:17.528431   17586 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.key.1a707ed9 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.key
	I1004 02:49:17.528475   17586 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/proxy-client.key
	I1004 02:49:17.528491   17586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/proxy-client.crt with IP's: []
	I1004 02:49:17.816145   17586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/proxy-client.crt ...
	I1004 02:49:17.816180   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/proxy-client.crt: {Name:mkac5bc584424a73c1f4ef5cc082ab252c5dec3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.816335   17586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/proxy-client.key ...
	I1004 02:49:17.816346   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/proxy-client.key: {Name:mke8531f4884dc4e8612ff83e9a2c1a996031a54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.816519   17586 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 02:49:17.816558   17586 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 02:49:17.816584   17586 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 02:49:17.816608   17586 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 02:49:17.817134   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 02:49:17.849435   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 02:49:17.876023   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 02:49:17.901542   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 02:49:17.927993   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1004 02:49:17.954307   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 02:49:17.980129   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 02:49:18.006769   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 02:49:18.032633   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 02:49:18.057710   17586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 02:49:18.074969   17586 ssh_runner.go:195] Run: openssl version
	I1004 02:49:18.081094   17586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 02:49:18.092277   17586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:49:18.097133   17586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:49:18.097209   17586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:49:18.103473   17586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 02:49:18.114795   17586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 02:49:18.119211   17586 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 02:49:18.119274   17586 kubeadm.go:392] StartCluster: {Name:addons-335265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-335265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 02:49:18.119360   17586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 02:49:18.119407   17586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 02:49:18.155217   17586 cri.go:89] found id: ""
	I1004 02:49:18.155296   17586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 02:49:18.165622   17586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 02:49:18.175899   17586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 02:49:18.186061   17586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 02:49:18.186091   17586 kubeadm.go:157] found existing configuration files:
	
	I1004 02:49:18.186143   17586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 02:49:18.195765   17586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 02:49:18.195835   17586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 02:49:18.205944   17586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 02:49:18.215612   17586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 02:49:18.215687   17586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 02:49:18.225604   17586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 02:49:18.235547   17586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 02:49:18.235615   17586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 02:49:18.246173   17586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 02:49:18.255931   17586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 02:49:18.255990   17586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 02:49:18.266436   17586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 02:49:18.315072   17586 kubeadm.go:310] W1004 02:49:18.269357     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 02:49:18.315763   17586 kubeadm.go:310] W1004 02:49:18.270420     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 02:49:18.446332   17586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 02:49:29.008798   17586 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 02:49:29.008862   17586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 02:49:29.008964   17586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 02:49:29.009112   17586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 02:49:29.009215   17586 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 02:49:29.009293   17586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 02:49:29.010960   17586 out.go:235]   - Generating certificates and keys ...
	I1004 02:49:29.011038   17586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 02:49:29.011099   17586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 02:49:29.011192   17586 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 02:49:29.011277   17586 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1004 02:49:29.011356   17586 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1004 02:49:29.011412   17586 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1004 02:49:29.011491   17586 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1004 02:49:29.011637   17586 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-335265 localhost] and IPs [192.168.39.175 127.0.0.1 ::1]
	I1004 02:49:29.011713   17586 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1004 02:49:29.011894   17586 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-335265 localhost] and IPs [192.168.39.175 127.0.0.1 ::1]
	I1004 02:49:29.011997   17586 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 02:49:29.012099   17586 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 02:49:29.012186   17586 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1004 02:49:29.012281   17586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 02:49:29.012355   17586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 02:49:29.012435   17586 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 02:49:29.012516   17586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 02:49:29.012602   17586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 02:49:29.012686   17586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 02:49:29.012810   17586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 02:49:29.012895   17586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 02:49:29.014500   17586 out.go:235]   - Booting up control plane ...
	I1004 02:49:29.014611   17586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 02:49:29.014702   17586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 02:49:29.014786   17586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 02:49:29.014898   17586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 02:49:29.015013   17586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 02:49:29.015055   17586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 02:49:29.015187   17586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 02:49:29.015278   17586 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 02:49:29.015355   17586 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.07151ms
	I1004 02:49:29.015426   17586 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 02:49:29.015480   17586 kubeadm.go:310] [api-check] The API server is healthy after 5.50214285s
	I1004 02:49:29.015595   17586 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 02:49:29.015711   17586 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 02:49:29.015763   17586 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 02:49:29.015968   17586 kubeadm.go:310] [mark-control-plane] Marking the node addons-335265 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 02:49:29.016042   17586 kubeadm.go:310] [bootstrap-token] Using token: nfgnag.mugyjuqzatxni5xt
	I1004 02:49:29.017666   17586 out.go:235]   - Configuring RBAC rules ...
	I1004 02:49:29.017752   17586 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 02:49:29.017821   17586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 02:49:29.017932   17586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 02:49:29.018049   17586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 02:49:29.018154   17586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 02:49:29.018248   17586 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 02:49:29.018352   17586 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 02:49:29.018392   17586 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 02:49:29.018498   17586 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 02:49:29.018514   17586 kubeadm.go:310] 
	I1004 02:49:29.018596   17586 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 02:49:29.018606   17586 kubeadm.go:310] 
	I1004 02:49:29.018713   17586 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 02:49:29.018727   17586 kubeadm.go:310] 
	I1004 02:49:29.018761   17586 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 02:49:29.018836   17586 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 02:49:29.018883   17586 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 02:49:29.018889   17586 kubeadm.go:310] 
	I1004 02:49:29.018937   17586 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 02:49:29.018943   17586 kubeadm.go:310] 
	I1004 02:49:29.018983   17586 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 02:49:29.018990   17586 kubeadm.go:310] 
	I1004 02:49:29.019041   17586 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 02:49:29.019107   17586 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 02:49:29.019172   17586 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 02:49:29.019186   17586 kubeadm.go:310] 
	I1004 02:49:29.019292   17586 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 02:49:29.019355   17586 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 02:49:29.019362   17586 kubeadm.go:310] 
	I1004 02:49:29.019429   17586 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nfgnag.mugyjuqzatxni5xt \
	I1004 02:49:29.019511   17586 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 \
	I1004 02:49:29.019542   17586 kubeadm.go:310] 	--control-plane 
	I1004 02:49:29.019550   17586 kubeadm.go:310] 
	I1004 02:49:29.019625   17586 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 02:49:29.019633   17586 kubeadm.go:310] 
	I1004 02:49:29.019708   17586 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nfgnag.mugyjuqzatxni5xt \
	I1004 02:49:29.019872   17586 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 
	I1004 02:49:29.019890   17586 cni.go:84] Creating CNI manager for ""
	I1004 02:49:29.019899   17586 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 02:49:29.021342   17586 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 02:49:29.022515   17586 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 02:49:29.034532   17586 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 02:49:29.055639   17586 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 02:49:29.055771   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:29.055820   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-335265 minikube.k8s.io/updated_at=2024_10_04T02_49_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=addons-335265 minikube.k8s.io/primary=true
	I1004 02:49:29.089558   17586 ops.go:34] apiserver oom_adj: -16
	I1004 02:49:29.211558   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:29.712413   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:30.211932   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:30.711890   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:31.212436   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:31.711915   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:32.212472   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:32.712083   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:33.211723   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:33.311027   17586 kubeadm.go:1113] duration metric: took 4.255321603s to wait for elevateKubeSystemPrivileges
	I1004 02:49:33.311068   17586 kubeadm.go:394] duration metric: took 15.191797173s to StartCluster
	I1004 02:49:33.311091   17586 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:33.311227   17586 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 02:49:33.311629   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:33.311880   17586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 02:49:33.311897   17586 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:49:33.311956   17586 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:true metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1004 02:49:33.312081   17586 addons.go:69] Setting ingress=true in profile "addons-335265"
	I1004 02:49:33.312098   17586 addons.go:69] Setting yakd=true in profile "addons-335265"
	I1004 02:49:33.312115   17586 addons.go:234] Setting addon ingress=true in "addons-335265"
	I1004 02:49:33.312120   17586 config.go:182] Loaded profile config "addons-335265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 02:49:33.312130   17586 addons.go:234] Setting addon yakd=true in "addons-335265"
	I1004 02:49:33.312135   17586 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-335265"
	I1004 02:49:33.312152   17586 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-335265"
	I1004 02:49:33.312161   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312169   17586 addons.go:69] Setting inspektor-gadget=true in profile "addons-335265"
	I1004 02:49:33.312172   17586 addons.go:69] Setting default-storageclass=true in profile "addons-335265"
	I1004 02:49:33.312181   17586 addons.go:234] Setting addon inspektor-gadget=true in "addons-335265"
	I1004 02:49:33.312186   17586 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-335265"
	I1004 02:49:33.312173   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312204   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312226   17586 addons.go:69] Setting ingress-dns=true in profile "addons-335265"
	I1004 02:49:33.312220   17586 addons.go:69] Setting cloud-spanner=true in profile "addons-335265"
	I1004 02:49:33.312245   17586 addons.go:234] Setting addon ingress-dns=true in "addons-335265"
	I1004 02:49:33.312268   17586 addons.go:234] Setting addon cloud-spanner=true in "addons-335265"
	I1004 02:49:33.312288   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312304   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312623   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.312663   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.312668   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.312693   17586 addons.go:69] Setting logviewer=true in profile "addons-335265"
	I1004 02:49:33.312727   17586 addons.go:69] Setting registry=true in profile "addons-335265"
	I1004 02:49:33.312733   17586 addons.go:69] Setting volumesnapshots=true in profile "addons-335265"
	I1004 02:49:33.312737   17586 addons.go:69] Setting storage-provisioner=true in profile "addons-335265"
	I1004 02:49:33.312743   17586 addons.go:234] Setting addon registry=true in "addons-335265"
	I1004 02:49:33.312751   17586 addons.go:234] Setting addon storage-provisioner=true in "addons-335265"
	I1004 02:49:33.312751   17586 addons.go:69] Setting volcano=true in profile "addons-335265"
	I1004 02:49:33.312756   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.312766   17586 addons.go:234] Setting addon volcano=true in "addons-335265"
	I1004 02:49:33.312699   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.312772   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312786   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312799   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.312841   17586 addons.go:234] Setting addon logviewer=true in "addons-335265"
	I1004 02:49:33.312874   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312967   17586 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-335265"
	I1004 02:49:33.313066   17586 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-335265"
	I1004 02:49:33.313111   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312161   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.313154   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.312766   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.313195   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.313239   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.313262   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.313479   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.312737   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.313511   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.312744   17586 addons.go:234] Setting addon volumesnapshots=true in "addons-335265"
	I1004 02:49:33.313111   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.313567   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.313577   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.313595   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.313602   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.312709   17586 addons.go:69] Setting gcp-auth=true in profile "addons-335265"
	I1004 02:49:33.312720   17586 addons.go:69] Setting metrics-server=true in profile "addons-335265"
	I1004 02:49:33.312719   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.313644   17586 addons.go:234] Setting addon metrics-server=true in "addons-335265"
	I1004 02:49:33.313653   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.313579   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.313700   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.313723   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312716   17586 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-335265"
	I1004 02:49:33.313776   17586 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-335265"
	I1004 02:49:33.313821   17586 out.go:177] * Verifying Kubernetes components...
	I1004 02:49:33.313943   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.314116   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.314127   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.314143   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.314150   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.314345   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.314371   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.312700   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.314423   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.315292   17586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:49:33.313633   17586 mustload.go:65] Loading cluster: addons-335265
	I1004 02:49:33.348335   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I1004 02:49:33.351915   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44379
	I1004 02:49:33.351934   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44575
	I1004 02:49:33.352107   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37717
	I1004 02:49:33.352210   17586 config.go:182] Loaded profile config "addons-335265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 02:49:33.352302   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I1004 02:49:33.352718   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.352766   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.360638   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.360674   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.360722   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44043
	I1004 02:49:33.360742   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.360674   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.360781   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.361457   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.361482   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.361630   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.361646   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.361673   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.361680   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.361727   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.361829   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.361841   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.362018   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.362230   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.362249   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.362262   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.362433   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.362458   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.362521   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.362577   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.362627   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.362666   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.362725   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.362763   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.363519   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.363562   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.364709   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.364742   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.365106   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.365600   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.365632   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.368193   17586 addons.go:234] Setting addon default-storageclass=true in "addons-335265"
	I1004 02:49:33.368236   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.368662   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.368699   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.375574   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34451
	I1004 02:49:33.375593   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40171
	I1004 02:49:33.376334   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.376459   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.377291   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.377324   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.378059   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.378104   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.378345   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.378527   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.393762   17586 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-335265"
	I1004 02:49:33.393816   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.394039   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.394059   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I1004 02:49:33.394077   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41233
	I1004 02:49:33.394099   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40207
	I1004 02:49:33.394154   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.394190   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.394063   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.394505   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.394701   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.394780   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.395357   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.395395   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.395440   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.395455   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.395592   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.395612   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.396004   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.396007   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.396050   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.396499   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.396532   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.396585   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.396629   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.396760   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37919
	I1004 02:49:33.397994   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.398165   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.398178   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.398676   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.398692   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.399138   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.399199   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33809
	I1004 02:49:33.399922   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.399958   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.400187   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.400882   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.401397   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.401435   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.401815   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I1004 02:49:33.402308   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.402324   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.402719   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.403261   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.403545   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.403562   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.403994   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.404029   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.404052   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.404684   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.404734   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.406031   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I1004 02:49:33.406510   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.406918   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.406942   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.407324   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.407506   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.409305   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.411570   17586 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1004 02:49:33.413027   17586 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1004 02:49:33.413046   17586 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1004 02:49:33.413067   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.416453   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.416573   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.416595   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.416864   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.417062   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.417217   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.417355   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.417904   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45481
	I1004 02:49:33.420406   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.421823   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.421854   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.421955   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43065
	I1004 02:49:33.422443   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.422967   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.422984   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.423330   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.423937   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.423975   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.424175   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35797
	I1004 02:49:33.424765   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.425184   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.425201   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.425495   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.426007   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.426042   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.427991   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I1004 02:49:33.428167   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.428348   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.428432   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.430243   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.430954   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.430977   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.431506   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.431661   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.432396   17586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1004 02:49:33.432661   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I1004 02:49:33.433301   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.433496   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.434065   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.434090   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.434519   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.434784   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.434911   17586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1004 02:49:33.435036   17586 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 02:49:33.436420   17586 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:49:33.436444   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 02:49:33.436462   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.436419   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.436888   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.436925   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.438326   17586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1004 02:49:33.439928   17586 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1004 02:49:33.439944   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1004 02:49:33.439959   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.440476   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.441921   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.442020   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.442664   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.442861   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.442965   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.443052   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.443709   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.444144   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.444171   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.444408   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.444581   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.444720   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.444854   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.447107   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46397
	I1004 02:49:33.447800   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.448406   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.448425   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.448830   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.449601   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.450352   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44265
	I1004 02:49:33.450967   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.451635   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.451652   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.451794   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.452580   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.452935   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.453729   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41531
	I1004 02:49:33.453844   17586 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1004 02:49:33.454185   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.454710   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.454727   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.455241   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.455366   17586 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1004 02:49:33.455383   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1004 02:49:33.455402   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.455620   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.457150   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I1004 02:49:33.457451   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.458078   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.458591   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.458624   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.459322   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1004 02:49:33.459424   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.459475   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33145
	I1004 02:49:33.459534   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.459739   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:33.459755   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:33.460006   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:33.460028   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:33.460037   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:33.460048   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:33.460339   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:33.460352   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	W1004 02:49:33.460448   17586 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1004 02:49:33.461650   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I1004 02:49:33.461659   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.461664   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I1004 02:49:33.461773   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1004 02:49:33.461815   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.462113   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.462211   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.462367   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.462387   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.462660   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.462678   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.462966   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.462981   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.463588   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.463618   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.463652   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.463705   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44895
	I1004 02:49:33.463919   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.464174   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.464236   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.464288   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.464442   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.464578   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1004 02:49:33.464595   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.464743   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.465110   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45621
	I1004 02:49:33.465143   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.465185   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.465500   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.465812   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.465989   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.466355   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.466223   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I1004 02:49:33.466732   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.466771   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.466776   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.466972   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.467057   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.467253   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.467268   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.467817   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1004 02:49:33.467847   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.468404   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.468509   17586 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1004 02:49:33.468613   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35309
	I1004 02:49:33.468759   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.468860   17586 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1004 02:49:33.469022   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.469309   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.469730   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.469751   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.469885   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.469945   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.470052   17586 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 02:49:33.470056   17586 out.go:177]   - Using image docker.io/ivans3/minikube-log-viewer:v1
	I1004 02:49:33.470068   17586 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 02:49:33.470160   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.470201   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.470052   17586 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1004 02:49:33.470222   17586 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1004 02:49:33.470238   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.470275   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.470286   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.470890   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.470948   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.471284   17586 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1004 02:49:33.471744   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.471360   17586 addons.go:431] installing /etc/kubernetes/addons/logviewer-dp-and-svc.yaml
	I1004 02:49:33.471804   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/logviewer-dp-and-svc.yaml (2016 bytes)
	I1004 02:49:33.471818   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.471580   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.472058   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.472255   17586 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1004 02:49:33.472422   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1004 02:49:33.473242   17586 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1004 02:49:33.473262   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1004 02:49:33.473279   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.473999   17586 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1004 02:49:33.474816   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.475094   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39363
	I1004 02:49:33.475296   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37935
	I1004 02:49:33.475402   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1004 02:49:33.475432   17586 out.go:177]   - Using image docker.io/registry:2.8.3
	I1004 02:49:33.475837   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.475532   17586 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1004 02:49:33.475932   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1004 02:49:33.475950   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.476012   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.476494   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1004 02:49:33.476498   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.477053   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.477088   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.477124   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.477471   17586 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1004 02:49:33.477491   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1004 02:49:33.477508   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.477893   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.477510   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.477714   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.478034   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.478051   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.477860   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.478078   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.478306   17586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1004 02:49:33.478323   17586 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1004 02:49:33.478340   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.478518   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.478523   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.478662   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.478677   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.478900   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.478934   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.479080   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.479144   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.479271   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.479308   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.479432   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.479726   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.479894   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.479914   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.480047   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1004 02:49:33.480417   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.480583   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.480708   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.480857   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.481054   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.481263   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.481422   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.481722   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.482103   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.482130   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.482288   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.482479   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.482590   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.482686   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.482919   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.483028   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1004 02:49:33.483242   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.483261   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.483384   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.483529   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.483662   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.483797   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.484040   17586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1004 02:49:33.484055   17586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1004 02:49:33.484070   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.484109   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.484121   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.484689   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.484745   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.484768   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.484784   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.484810   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.485085   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.485290   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.485398   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.485540   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.486093   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.486346   17586 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 02:49:33.486358   17586 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 02:49:33.486380   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.489049   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.490222   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.490263   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.490317   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.490342   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.490639   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.490668   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.490673   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.490840   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.490863   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.490982   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.491007   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.491130   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.491247   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.502831   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32895
	I1004 02:49:33.503255   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.503724   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.503743   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.504112   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.504279   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.506069   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.508108   17586 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1004 02:49:33.509789   17586 out.go:177]   - Using image docker.io/busybox:stable
	I1004 02:49:33.511272   17586 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1004 02:49:33.511290   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1004 02:49:33.511309   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.514611   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.515039   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.515068   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.515211   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.515342   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.515492   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.515706   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.886224   17586 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1004 02:49:33.886248   17586 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1004 02:49:33.896054   17586 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1004 02:49:33.896080   17586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1004 02:49:33.901319   17586 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 02:49:33.901339   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1004 02:49:33.945893   17586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 02:49:33.946344   17586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 02:49:33.948133   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1004 02:49:33.972688   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 02:49:33.987719   17586 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 02:49:33.987749   17586 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 02:49:33.990078   17586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1004 02:49:33.990103   17586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1004 02:49:34.005444   17586 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1004 02:49:34.005465   17586 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1004 02:49:34.014318   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1004 02:49:34.030068   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:49:34.057095   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1004 02:49:34.085700   17586 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1004 02:49:34.085726   17586 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1004 02:49:34.090870   17586 addons.go:431] installing /etc/kubernetes/addons/logviewer-rbac.yaml
	I1004 02:49:34.090888   17586 ssh_runner.go:362] scp logviewer/logviewer-rbac.yaml --> /etc/kubernetes/addons/logviewer-rbac.yaml (1064 bytes)
	I1004 02:49:34.106271   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1004 02:49:34.111121   17586 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1004 02:49:34.111146   17586 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1004 02:49:34.112787   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1004 02:49:34.115049   17586 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1004 02:49:34.115067   17586 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1004 02:49:34.175071   17586 node_ready.go:35] waiting up to 6m0s for node "addons-335265" to be "Ready" ...
	I1004 02:49:34.181373   17586 node_ready.go:49] node "addons-335265" has status "Ready":"True"
	I1004 02:49:34.181409   17586 node_ready.go:38] duration metric: took 6.298512ms for node "addons-335265" to be "Ready" ...
	I1004 02:49:34.181421   17586 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:49:34.191489   17586 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2nft6" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:34.231218   17586 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1004 02:49:34.231250   17586 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1004 02:49:34.245776   17586 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:49:34.245808   17586 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 02:49:34.261310   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/logviewer-dp-and-svc.yaml -f /etc/kubernetes/addons/logviewer-rbac.yaml
	I1004 02:49:34.266569   17586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1004 02:49:34.266596   17586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1004 02:49:34.330346   17586 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1004 02:49:34.330376   17586 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1004 02:49:34.345074   17586 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1004 02:49:34.345103   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1004 02:49:34.453371   17586 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1004 02:49:34.453405   17586 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1004 02:49:34.468098   17586 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1004 02:49:34.468131   17586 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1004 02:49:34.493561   17586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1004 02:49:34.493586   17586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1004 02:49:34.497625   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:49:34.598033   17586 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1004 02:49:34.598065   17586 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1004 02:49:34.604066   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1004 02:49:34.719646   17586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1004 02:49:34.719672   17586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1004 02:49:34.720247   17586 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1004 02:49:34.720262   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1004 02:49:34.757085   17586 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1004 02:49:34.757106   17586 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1004 02:49:34.791069   17586 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1004 02:49:34.791096   17586 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1004 02:49:34.971059   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1004 02:49:34.994816   17586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1004 02:49:34.994840   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1004 02:49:35.065592   17586 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 02:49:35.065612   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1004 02:49:35.147113   17586 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1004 02:49:35.147136   17586 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1004 02:49:35.406831   17586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1004 02:49:35.406862   17586 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1004 02:49:35.528200   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 02:49:35.584032   17586 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1004 02:49:35.584062   17586 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1004 02:49:35.748491   17586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1004 02:49:35.748511   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1004 02:49:35.890310   17586 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1004 02:49:35.890337   17586 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1004 02:49:36.077004   17586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1004 02:49:36.077032   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1004 02:49:36.146287   17586 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1004 02:49:36.146308   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1004 02:49:36.200452   17586 pod_ready.go:103] pod "coredns-7c65d6cfc9-2nft6" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:36.277013   17586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1004 02:49:36.277037   17586 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1004 02:49:36.430708   17586 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.484328726s)
	I1004 02:49:36.430738   17586 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1004 02:49:36.510247   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1004 02:49:36.552653   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1004 02:49:36.950073   17586 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-335265" context rescaled to 1 replicas
	I1004 02:49:38.239191   17586 pod_ready.go:103] pod "coredns-7c65d6cfc9-2nft6" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:40.493877   17586 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1004 02:49:40.493919   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:40.496548   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:40.496948   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:40.496979   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:40.497135   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:40.497363   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:40.497533   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:40.497749   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:40.730094   17586 pod_ready.go:103] pod "coredns-7c65d6cfc9-2nft6" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:40.914225   17586 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1004 02:49:41.190464   17586 addons.go:234] Setting addon gcp-auth=true in "addons-335265"
	I1004 02:49:41.190518   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:41.190858   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:41.190896   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:41.206666   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32775
	I1004 02:49:41.207572   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:41.208149   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:41.208172   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:41.208495   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:41.209020   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:41.209049   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:41.224402   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42151
	I1004 02:49:41.224876   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:41.225406   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:41.225431   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:41.225703   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:41.225862   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:41.227358   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:41.227551   17586 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1004 02:49:41.227575   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:41.230940   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:41.231413   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:41.231443   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:41.231597   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:41.231765   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:41.231910   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:41.232029   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:41.720463   17586 pod_ready.go:93] pod "coredns-7c65d6cfc9-2nft6" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.720489   17586 pod_ready.go:82] duration metric: took 7.528971184s for pod "coredns-7c65d6cfc9-2nft6" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.720501   17586 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vms49" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.754679   17586 pod_ready.go:93] pod "coredns-7c65d6cfc9-vms49" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.754705   17586 pod_ready.go:82] duration metric: took 34.19619ms for pod "coredns-7c65d6cfc9-vms49" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.754718   17586 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-335265" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.785086   17586 pod_ready.go:93] pod "etcd-addons-335265" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.785108   17586 pod_ready.go:82] duration metric: took 30.383054ms for pod "etcd-addons-335265" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.785119   17586 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-335265" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.799731   17586 pod_ready.go:93] pod "kube-apiserver-addons-335265" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.799756   17586 pod_ready.go:82] duration metric: took 14.628834ms for pod "kube-apiserver-addons-335265" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.799769   17586 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-335265" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:42.322999   17586 pod_ready.go:93] pod "kube-controller-manager-addons-335265" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:42.323021   17586 pod_ready.go:82] duration metric: took 523.243938ms for pod "kube-controller-manager-addons-335265" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:42.323036   17586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sl5bg" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:42.539515   17586 pod_ready.go:93] pod "kube-proxy-sl5bg" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:42.539561   17586 pod_ready.go:82] duration metric: took 216.497077ms for pod "kube-proxy-sl5bg" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:42.539573   17586 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-335265" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:42.919968   17586 pod_ready.go:93] pod "kube-scheduler-addons-335265" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:42.919990   17586 pod_ready.go:82] duration metric: took 380.410368ms for pod "kube-scheduler-addons-335265" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:42.919997   17586 pod_ready.go:39] duration metric: took 8.738564467s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:49:42.920012   17586 api_server.go:52] waiting for apiserver process to appear ...
	I1004 02:49:42.920058   17586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 02:49:42.953040   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.004872963s)
	I1004 02:49:42.953058   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.980343573s)
	I1004 02:49:42.953090   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953101   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953114   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.938767641s)
	I1004 02:49:42.953129   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953138   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953164   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953179   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953201   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.923099395s)
	I1004 02:49:42.953237   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953265   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953298   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.896169977s)
	I1004 02:49:42.953337   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953348   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953359   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.953372   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.953379   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953386   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953433   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.953444   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.953444   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.847143597s)
	I1004 02:49:42.953453   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953462   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953491   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.840686228s)
	I1004 02:49:42.953494   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.953461   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953508   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953512   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953518   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953537   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.953444   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.953541   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.953546   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.953549   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.953555   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953557   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953562   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953564   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953589   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/logviewer-dp-and-svc.yaml -f /etc/kubernetes/addons/logviewer-rbac.yaml: (8.69225589s)
	I1004 02:49:42.953603   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953611   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953673   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.456016985s)
	I1004 02:49:42.953691   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953700   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953933   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.349836434s)
	I1004 02:49:42.953953   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953966   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.954037   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.982953657s)
	I1004 02:49:42.954056   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.954064   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.954183   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.425954315s)
	W1004 02:49:42.954208   17586 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1004 02:49:42.954237   17586 retry.go:31] will retry after 203.667463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
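	The failure above is an ordering race, not a broken manifest: the VolumeSnapshotClass object cannot be mapped until the just-created snapshot.storage.k8s.io CRDs are being served, hence "ensure CRDs are installed first", and the addon manager simply schedules a retry ("will retry after 203.667463ms"; the retried call at 02:49:43.158991 below also adds --force). A minimal Go sketch of that retry-with-backoff pattern follows; it is illustrative only and is not minikube's actual retry.go, and the attempt count and delays are assumptions.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs "kubectl apply" with a short backoff until the
	// CRD-backed resources map successfully. Sketch only; minikube's real
	// retry logic lives in retry.go and differs in detail.
	func applyWithRetry(manifests []string, attempts int, delay time.Duration) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed: %v: %s", err, out)
			time.Sleep(delay) // ~200ms in the log above
			delay *= 2        // back off between attempts
		}
		return lastErr
	}

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		}
		if err := applyWithRetry(manifests, 5, 200*time.Millisecond); err != nil {
			fmt.Println(err)
		}
	}
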
	I1004 02:49:42.954346   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.444061841s)
	I1004 02:49:42.954363   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.954371   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.957760   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.957761   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.957796   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.957803   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.957808   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.957811   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.957819   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.957826   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.957866   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.957875   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.957881   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.957889   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.957894   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.957896   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.957920   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.957927   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.957931   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.957934   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.957938   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.957941   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.957945   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.957952   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.957987   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958000   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958007   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958013   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958020   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958027   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958049   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.958056   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.958104   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958128   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958134   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958141   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.958147   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.958203   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958212   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958219   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.958225   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.958267   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958286   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958292   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958301   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.958307   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.958342   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958360   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958366   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958430   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958448   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958456   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958465   17586 addons.go:475] Verifying addon ingress=true in "addons-335265"
	I1004 02:49:42.958688   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958709   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958731   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958737   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958782   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958802   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958810   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958818   17586 addons.go:475] Verifying addon registry=true in "addons-335265"
	I1004 02:49:42.958901   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958927   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958933   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.959154   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.959177   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.960610   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.959193   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.959210   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.960667   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.959233   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.960702   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.959336   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.959492   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.960908   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.960986   17586 out.go:177] * Verifying registry addon...
	I1004 02:49:42.959762   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.959801   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.961043   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.961067   17586 addons.go:475] Verifying addon metrics-server=true in "addons-335265"
	I1004 02:49:42.961097   17586 out.go:177] * Verifying ingress addon...
	I1004 02:49:42.962070   17586 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-335265 service yakd-dashboard -n yakd-dashboard
	
	I1004 02:49:42.963864   17586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1004 02:49:42.963868   17586 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1004 02:49:42.974966   17586 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1004 02:49:42.974999   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:42.979384   17586 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1004 02:49:42.979401   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
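	The kapi.go lines above (and the long run of "current state: Pending" entries that follow) are the addon verifier polling pods by label selector until every match leaves Pending. A minimal client-go sketch of that polling loop is shown below; it is an assumption-laden illustration, not minikube's kapi.go, and the kubeconfig path and timeout are placeholders.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel lists pods matching selector in ns until all are Running,
	// mirroring the "waiting for pod ..." log lines. Illustrative sketch only.
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					ready = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if ready {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %q", selector)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path is illustrative
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		_ = waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)
	}
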
	I1004 02:49:43.000454   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:43.000477   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:43.000728   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:43.000779   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:43.000793   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	W1004 02:49:43.000896   17586 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1004 02:49:43.012240   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:43.012257   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:43.012490   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:43.012508   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:43.158991   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 02:49:43.473236   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:43.473417   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:43.995974   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:43.996523   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:44.330915   17586 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.103343121s)
	I1004 02:49:44.330944   17586 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.410869693s)
	I1004 02:49:44.330967   17586 api_server.go:72] duration metric: took 11.019046059s to wait for apiserver process to appear ...
	I1004 02:49:44.330975   17586 api_server.go:88] waiting for apiserver healthz status ...
	I1004 02:49:44.330996   17586 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1004 02:49:44.330911   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.778208695s)
	I1004 02:49:44.331140   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:44.331160   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:44.331471   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:44.331481   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:44.331544   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:44.331557   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:44.331564   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:44.331800   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:44.331814   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:44.331823   17586 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-335265"
	I1004 02:49:44.331830   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:44.332664   17586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1004 02:49:44.333537   17586 out.go:177] * Verifying csi-hostpath-driver addon...
	I1004 02:49:44.335020   17586 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1004 02:49:44.335757   17586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1004 02:49:44.336178   17586 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1004 02:49:44.336194   17586 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1004 02:49:44.342296   17586 api_server.go:279] https://192.168.39.175:8443/healthz returned 200:
	ok
	I1004 02:49:44.343523   17586 api_server.go:141] control plane version: v1.31.1
	I1004 02:49:44.343552   17586 api_server.go:131] duration metric: took 12.569393ms to wait for apiserver health ...
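	The api_server.go lines above show the readiness gate: the harness considers the control plane up once https://192.168.39.175:8443/healthz returns 200 "ok". A minimal Go sketch of such a probe follows; it is illustrative only, and where minikube authenticates with client certificates this sketch skips TLS verification purely for demonstration.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz polls an apiserver /healthz endpoint until it returns 200.
	// Sketch only; real probes should present the cluster's client certs
	// instead of disabling verification.
	func probeHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz did not become ready within %s", timeout)
	}

	func main() {
		if err := probeHealthz("https://192.168.39.175:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
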
	I1004 02:49:44.343563   17586 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 02:49:44.348725   17586 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1004 02:49:44.348756   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:44.359730   17586 system_pods.go:59] 19 kube-system pods found
	I1004 02:49:44.359775   17586 system_pods.go:61] "coredns-7c65d6cfc9-2nft6" [010ae061-9933-4fcb-bb73-9c9607bea03e] Running
	I1004 02:49:44.359802   17586 system_pods.go:61] "coredns-7c65d6cfc9-vms49" [7ae77679-4aea-4650-b804-4b62d483ceb2] Running
	I1004 02:49:44.359815   17586 system_pods.go:61] "csi-hostpath-attacher-0" [f8cef70e-6711-4e45-986e-990453722a26] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1004 02:49:44.359824   17586 system_pods.go:61] "csi-hostpath-resizer-0" [2dc3007a-e1ee-4845-88b8-512ac894863d] Pending
	I1004 02:49:44.359841   17586 system_pods.go:61] "csi-hostpathplugin-fzd54" [b04e23ab-8e0e-416e-8280-7dee1a52b8a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1004 02:49:44.359852   17586 system_pods.go:61] "etcd-addons-335265" [b1eb136d-5c61-4604-93df-2b7b04a05254] Running
	I1004 02:49:44.359861   17586 system_pods.go:61] "kube-apiserver-addons-335265" [1381dd5e-1b56-4429-93be-d878c04cb93c] Running
	I1004 02:49:44.359871   17586 system_pods.go:61] "kube-controller-manager-addons-335265" [6e317c8f-8e29-4a47-940d-1ca2ae208303] Running
	I1004 02:49:44.359883   17586 system_pods.go:61] "kube-ingress-dns-minikube" [3684f708-8dec-41cd-b503-58a74f8f3df3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1004 02:49:44.359893   17586 system_pods.go:61] "kube-proxy-sl5bg" [03727f31-3609-4d9c-ba1d-da91df4ce689] Running
	I1004 02:49:44.359900   17586 system_pods.go:61] "kube-scheduler-addons-335265" [9e73330c-1229-4615-b08a-ac733c781949] Running
	I1004 02:49:44.359913   17586 system_pods.go:61] "logviewer-7c79c8bcc9-ddvsm" [eaf2b3b6-6d22-4038-8bdc-d56ceebb3cb6] Pending / Ready:ContainersNotReady (containers with unready status: [logviewer]) / ContainersReady:ContainersNotReady (containers with unready status: [logviewer])
	I1004 02:49:44.359925   17586 system_pods.go:61] "metrics-server-84c5f94fbc-gqwd8" [6e302061-d82b-4ce2-b712-1faed975bc09] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 02:49:44.359940   17586 system_pods.go:61] "nvidia-device-plugin-daemonset-hk8t5" [9fc5b35d-0561-41df-ae69-27953695f6e2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1004 02:49:44.359954   17586 system_pods.go:61] "registry-66c9cd494c-nfhcd" [bf27c03f-b1e2-412d-a96b-4bb669dd6fd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1004 02:49:44.359967   17586 system_pods.go:61] "registry-proxy-csj4d" [b56921d1-efcc-463f-9f04-40fd7fde1775] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1004 02:49:44.359981   17586 system_pods.go:61] "snapshot-controller-56fcc65765-52lpd" [57e9c889-df7e-43b8-9cec-8ce9e6caaa21] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1004 02:49:44.359995   17586 system_pods.go:61] "snapshot-controller-56fcc65765-zkf5w" [68acf020-754d-4bb3-8793-bfd1aa2974dc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1004 02:49:44.360006   17586 system_pods.go:61] "storage-provisioner" [4f2eee80-691d-47ad-98f8-c06185ac9dec] Running
	I1004 02:49:44.360020   17586 system_pods.go:74] duration metric: took 16.443666ms to wait for pod list to return data ...
	I1004 02:49:44.360040   17586 default_sa.go:34] waiting for default service account to be created ...
	I1004 02:49:44.370151   17586 default_sa.go:45] found service account: "default"
	I1004 02:49:44.370264   17586 default_sa.go:55] duration metric: took 10.13041ms for default service account to be created ...
	I1004 02:49:44.370299   17586 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 02:49:44.420487   17586 system_pods.go:86] 19 kube-system pods found
	I1004 02:49:44.420516   17586 system_pods.go:89] "coredns-7c65d6cfc9-2nft6" [010ae061-9933-4fcb-bb73-9c9607bea03e] Running
	I1004 02:49:44.420523   17586 system_pods.go:89] "coredns-7c65d6cfc9-vms49" [7ae77679-4aea-4650-b804-4b62d483ceb2] Running
	I1004 02:49:44.420530   17586 system_pods.go:89] "csi-hostpath-attacher-0" [f8cef70e-6711-4e45-986e-990453722a26] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1004 02:49:44.420536   17586 system_pods.go:89] "csi-hostpath-resizer-0" [2dc3007a-e1ee-4845-88b8-512ac894863d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1004 02:49:44.420548   17586 system_pods.go:89] "csi-hostpathplugin-fzd54" [b04e23ab-8e0e-416e-8280-7dee1a52b8a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1004 02:49:44.420554   17586 system_pods.go:89] "etcd-addons-335265" [b1eb136d-5c61-4604-93df-2b7b04a05254] Running
	I1004 02:49:44.420561   17586 system_pods.go:89] "kube-apiserver-addons-335265" [1381dd5e-1b56-4429-93be-d878c04cb93c] Running
	I1004 02:49:44.420568   17586 system_pods.go:89] "kube-controller-manager-addons-335265" [6e317c8f-8e29-4a47-940d-1ca2ae208303] Running
	I1004 02:49:44.420576   17586 system_pods.go:89] "kube-ingress-dns-minikube" [3684f708-8dec-41cd-b503-58a74f8f3df3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1004 02:49:44.420588   17586 system_pods.go:89] "kube-proxy-sl5bg" [03727f31-3609-4d9c-ba1d-da91df4ce689] Running
	I1004 02:49:44.420593   17586 system_pods.go:89] "kube-scheduler-addons-335265" [9e73330c-1229-4615-b08a-ac733c781949] Running
	I1004 02:49:44.420610   17586 system_pods.go:89] "logviewer-7c79c8bcc9-ddvsm" [eaf2b3b6-6d22-4038-8bdc-d56ceebb3cb6] Pending / Ready:ContainersNotReady (containers with unready status: [logviewer]) / ContainersReady:ContainersNotReady (containers with unready status: [logviewer])
	I1004 02:49:44.420621   17586 system_pods.go:89] "metrics-server-84c5f94fbc-gqwd8" [6e302061-d82b-4ce2-b712-1faed975bc09] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 02:49:44.420627   17586 system_pods.go:89] "nvidia-device-plugin-daemonset-hk8t5" [9fc5b35d-0561-41df-ae69-27953695f6e2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1004 02:49:44.420635   17586 system_pods.go:89] "registry-66c9cd494c-nfhcd" [bf27c03f-b1e2-412d-a96b-4bb669dd6fd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1004 02:49:44.420641   17586 system_pods.go:89] "registry-proxy-csj4d" [b56921d1-efcc-463f-9f04-40fd7fde1775] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1004 02:49:44.420650   17586 system_pods.go:89] "snapshot-controller-56fcc65765-52lpd" [57e9c889-df7e-43b8-9cec-8ce9e6caaa21] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1004 02:49:44.420664   17586 system_pods.go:89] "snapshot-controller-56fcc65765-zkf5w" [68acf020-754d-4bb3-8793-bfd1aa2974dc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1004 02:49:44.420673   17586 system_pods.go:89] "storage-provisioner" [4f2eee80-691d-47ad-98f8-c06185ac9dec] Running
	I1004 02:49:44.420683   17586 system_pods.go:126] duration metric: took 50.372052ms to wait for k8s-apps to be running ...
	I1004 02:49:44.420695   17586 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 02:49:44.420742   17586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:49:44.481983   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:44.482039   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:44.487276   17586 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1004 02:49:44.487302   17586 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1004 02:49:44.594177   17586 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1004 02:49:44.594202   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1004 02:49:44.682078   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1004 02:49:44.840475   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:44.968377   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:44.968695   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:45.340040   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:45.432078   17586 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.011312805s)
	I1004 02:49:45.432117   17586 system_svc.go:56] duration metric: took 1.011419417s WaitForService to wait for kubelet
	I1004 02:49:45.432139   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.273032903s)
	I1004 02:49:45.432133   17586 kubeadm.go:582] duration metric: took 12.12020663s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 02:49:45.432159   17586 node_conditions.go:102] verifying NodePressure condition ...
	I1004 02:49:45.432190   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:45.432209   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:45.432463   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:45.432540   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:45.432559   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:45.432572   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:45.432578   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:45.432756   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:45.432796   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:45.432821   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:45.435775   17586 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 02:49:45.435814   17586 node_conditions.go:123] node cpu capacity is 2
	I1004 02:49:45.435827   17586 node_conditions.go:105] duration metric: took 3.661104ms to run NodePressure ...
	I1004 02:49:45.435840   17586 start.go:241] waiting for startup goroutines ...
	I1004 02:49:45.467929   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:45.469005   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:45.843625   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:45.993157   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:45.995325   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:46.235243   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.553118798s)
	I1004 02:49:46.235311   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:46.235327   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:46.235624   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:46.235641   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:46.235661   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:46.235697   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:46.235724   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:46.235930   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:46.235945   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:46.237482   17586 addons.go:475] Verifying addon gcp-auth=true in "addons-335265"
	I1004 02:49:46.239425   17586 out.go:177] * Verifying gcp-auth addon...
	I1004 02:49:46.241630   17586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1004 02:49:46.266963   17586 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1004 02:49:46.266983   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:46.350894   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:46.470052   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:46.470438   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:46.745485   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:46.841096   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:46.969566   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:46.970253   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:47.245437   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:47.347536   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:47.469779   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:47.470410   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:47.745449   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:47.840515   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:47.969775   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:47.971044   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:48.245171   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:48.340181   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:48.471580   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:48.471703   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:48.745266   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:48.843475   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:48.969449   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:48.969460   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:49.245802   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:49.341658   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:49.473759   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:49.474028   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:49.745751   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:49.841068   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:49.968769   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:49.969154   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:50.246074   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:50.339806   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:50.467775   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:50.469022   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:50.746574   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:50.840723   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:50.968464   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:50.968844   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:51.245619   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:51.343367   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:51.469611   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:51.469900   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:51.745723   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:51.840565   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:51.968983   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:51.969499   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:52.245248   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:52.340454   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:52.469211   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:52.469569   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:52.746021   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:52.841590   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:52.968317   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:52.968621   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:53.245560   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:53.340508   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:53.468580   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:53.473365   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:53.745845   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:53.841349   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:53.968856   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:53.969095   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:54.247577   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:54.341801   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:54.469627   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:54.471510   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:54.746671   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:54.841323   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:54.969621   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:54.969924   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:55.246190   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:55.341193   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:55.468975   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:55.469299   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:55.745364   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:55.840919   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:55.969021   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:55.969653   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:56.245144   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:56.340908   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:56.469542   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:56.469693   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:56.746668   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:56.840682   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:56.969422   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:56.970860   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:57.245749   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:57.340649   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:57.469240   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:57.469611   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:57.745759   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:57.841098   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:57.969593   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:57.970016   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:58.245128   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:58.341030   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:58.469209   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:58.469410   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:58.773786   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:58.874775   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:58.969215   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:58.969520   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:59.244986   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:59.339894   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:59.468115   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:59.470160   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:59.745355   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:59.841059   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:59.968810   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:59.969102   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:00.246067   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:00.340802   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:00.468135   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:00.468380   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:00.745614   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:00.841688   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:00.969241   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:00.969728   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:01.245662   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:01.340316   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:01.468953   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:01.469229   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:01.744731   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:01.840737   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:01.967705   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:01.968480   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:02.245302   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:02.340973   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:02.469424   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:02.469832   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:02.752146   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:02.841602   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:02.968518   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:02.968602   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:03.246014   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:03.339758   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:03.468625   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:03.468701   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:03.746128   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:03.840952   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:03.969137   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:03.969356   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:04.245669   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:04.340505   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:04.469434   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:04.470367   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:04.745271   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:04.844414   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:04.968983   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:04.969858   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:05.245975   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:05.341707   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:05.469509   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:05.471502   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:05.744938   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:05.841778   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:05.969657   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:05.969788   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:06.246060   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:06.342060   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:06.469180   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:06.469183   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:06.745776   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:06.840758   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:06.968790   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:06.969009   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:07.245528   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:07.340020   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:07.468969   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:07.469044   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:07.744940   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:07.841369   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:07.969128   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:07.969181   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:08.245230   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:08.340571   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:08.469010   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:08.469908   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:08.745616   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:08.840736   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:08.968971   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:08.969575   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:09.245326   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:09.359136   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:09.467813   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:09.468992   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:09.746293   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:09.840450   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:09.969665   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:09.970228   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:10.244757   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:10.341541   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:10.469322   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:10.469662   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:10.746219   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:10.842293   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:10.969724   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:10.970388   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:11.244701   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:11.340876   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:11.469396   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:11.469799   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:11.745573   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:11.841776   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:11.968419   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:11.968720   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:12.246376   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:12.342276   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:12.469324   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:12.469727   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:12.745846   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:12.841395   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:12.975823   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:12.975989   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:13.247657   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:13.340541   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:13.468808   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:13.468936   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:13.745354   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:13.841186   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:13.968444   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:13.969007   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:14.246708   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:14.341716   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:14.468737   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:14.469321   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:14.745375   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:14.969132   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:14.970969   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:14.974481   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:15.245131   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:15.339965   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:15.468131   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:15.469116   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:15.745906   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:15.847336   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:15.968989   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:15.969125   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:16.246214   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:16.342784   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:16.468859   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:16.469534   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:16.744824   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:16.841964   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:16.968490   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:16.969554   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:17.245778   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:17.341170   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:17.468737   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:17.469234   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:17.745719   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:17.840537   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:17.968710   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:17.970973   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:18.245827   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:18.340600   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:18.468927   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:18.469056   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:18.745894   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:18.840993   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:18.968662   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:18.969588   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:19.245724   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:19.347561   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:19.469658   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:19.470830   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:19.746015   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:19.841746   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:19.969265   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:19.969911   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:20.246107   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:20.340671   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:20.469023   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:20.469306   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:20.745276   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:20.840320   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:20.968182   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:20.968618   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:21.295214   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:21.342350   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:21.474232   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:21.474704   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:21.746019   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:21.840073   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:21.968686   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:21.968883   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:22.245462   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:22.340560   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:22.469308   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:22.469628   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:22.745548   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:22.840729   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:22.968686   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:22.970021   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:23.244972   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:23.342661   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:23.468579   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:23.469885   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:23.745492   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:23.842311   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:23.968652   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:23.969795   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:24.245950   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:24.341738   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:24.468766   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:24.469203   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:24.746105   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:24.840645   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:24.968939   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:24.969048   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:25.245098   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:25.341531   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:25.468997   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:25.469022   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:25.745274   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:25.840707   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:25.968510   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:25.968829   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:26.245912   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:26.341014   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:26.468615   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:26.469002   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:26.746277   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:26.840693   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:26.971379   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:26.971812   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:27.255084   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:27.343350   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:27.469658   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:27.470073   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:27.746467   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:27.841488   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:27.968366   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:27.969354   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:28.245274   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:28.340766   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:28.468478   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:28.468863   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:28.745715   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:28.842149   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:28.969001   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:28.969558   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:29.246174   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:29.340701   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:29.470362   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:29.470409   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:29.866701   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:29.867799   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:29.968679   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:29.969428   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:30.245747   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:30.341416   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:30.468295   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:30.468513   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:30.745295   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:30.840316   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:30.968994   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:30.969291   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:31.260366   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:31.363925   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:31.601991   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:31.602101   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:31.746832   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:31.843222   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:31.969229   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:31.969897   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:32.246611   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:32.340701   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:32.469558   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:32.470623   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:32.748816   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:32.840852   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:32.967969   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:32.968308   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:33.247615   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:33.341740   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:33.468781   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:33.469050   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:33.745474   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:33.841085   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:33.968170   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:33.968693   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:34.246264   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:34.340367   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:34.468299   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:34.468747   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:34.746055   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:34.842944   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:34.968409   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:34.969114   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:35.245369   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:35.340753   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:35.471264   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:35.471350   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:35.745222   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:35.840472   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:35.968223   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:35.968365   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:36.246074   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:36.340086   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:36.468055   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:36.468198   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:36.744977   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:36.839883   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:36.968208   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:36.968525   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:37.245512   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:37.340720   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:37.472391   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:37.472806   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:37.747473   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:37.840859   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:37.968649   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:37.969217   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:38.244944   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:38.340831   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:38.469874   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:38.470270   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:38.747502   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:38.848515   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:38.968242   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:38.969019   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:39.246532   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:39.340418   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:39.468725   17586 kapi.go:107] duration metric: took 56.504858137s to wait for kubernetes.io/minikube-addons=registry ...
	I1004 02:50:39.469188   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:39.745137   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:39.840510   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:39.969117   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:40.245600   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:40.340848   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:40.468785   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:40.745843   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:40.841338   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:40.969196   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:41.245489   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:41.340451   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:41.468737   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:41.745759   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:41.841399   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:41.968472   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:42.245329   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:42.440188   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:42.469128   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:42.745847   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:42.840815   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:42.968614   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:43.245330   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:43.339936   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:43.470229   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:43.748229   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:43.847409   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:43.969405   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:44.245823   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:44.347707   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:44.469145   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:44.745765   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:44.840690   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:44.968922   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:45.245848   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:45.341333   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:45.474248   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:45.746235   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:45.840861   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:45.968763   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:46.247265   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:46.343845   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:46.467461   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:46.745527   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:46.840750   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:46.969455   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:47.245470   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:47.340916   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:47.484243   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:47.746347   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:47.848367   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:47.969120   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:48.245579   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:48.340848   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:48.468256   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:48.747428   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:48.840805   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:48.971165   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:49.245816   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:49.348301   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:49.476521   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:49.745694   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:49.841599   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:49.968223   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:50.246595   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:50.340959   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:50.468104   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:50.746347   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:50.848063   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:50.969292   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:51.247208   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:51.340371   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:51.469108   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:51.746057   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:51.849793   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:51.969072   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:52.244831   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:52.341056   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:52.469322   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:52.745196   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:52.840563   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:53.111192   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:53.244936   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:53.343976   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:53.471545   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:53.746426   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:53.843193   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:53.970397   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:54.247890   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:54.341959   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:54.474960   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:54.746933   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:54.841441   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:54.967989   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:55.247582   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:55.340153   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:55.468318   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:55.746499   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:55.841750   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:55.973236   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:56.249197   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:56.340171   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:56.876849   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:56.877209   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:56.879041   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:56.973709   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:57.245308   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:57.340209   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:57.468048   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:57.746390   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:57.840908   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:57.967644   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:58.245592   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:58.341448   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:58.468067   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:58.751205   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:58.846188   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:58.970831   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:59.246278   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:59.340430   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:59.857166   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:59.859292   17586 kapi.go:107] duration metric: took 1m16.895419035s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1004 02:50:59.861007   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:00.246687   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:00.341082   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:00.745779   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:00.841570   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:01.247221   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:01.340026   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:01.746716   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:01.848403   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:02.252681   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:02.362907   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:02.754433   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:02.842090   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:03.245984   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:03.341521   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:03.746257   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:03.840435   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:04.245794   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:04.341543   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:04.746584   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:04.847725   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:05.246486   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:05.341369   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:05.745443   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:05.840590   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:06.245764   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:06.341452   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:06.746300   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:06.840173   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:07.245974   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:07.342299   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:07.746320   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:07.840606   17586 kapi.go:107] duration metric: took 1m23.504844495s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
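	(editor's note) The kapi.go lines above reflect a simple poll loop: the addon manager repeatedly lists pods matching a label selector (registry, ingress-nginx, csi-hostpath-driver, gcp-auth) and logs "Pending" until every matching pod is up, then emits the "duration metric" line. The sketch below is an illustrative approximation of that pattern using client-go, not minikube's actual kapi implementation; the namespace, timeout, and poll interval are assumptions chosen for the example.

	```go
	// Minimal sketch (assumption): poll until every pod matching a label
	// selector reports phase Running, roughly mirroring the wait loop the
	// kapi.go log lines above describe.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods blocks until all pods matching selector in ns are Running,
	// or the timeout expires.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				if len(pods.Items) == 0 {
					return false, nil // nothing scheduled yet, like the "Pending: [<nil>]" state above
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Namespace and timeout are illustrative assumptions.
		if err := waitForPods(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver", 10*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("all matching pods are Running")
	}
	```

	In the failing run, the registry, ingress-nginx, and csi-hostpath-driver selectors all eventually satisfy such a condition (their "duration metric" lines appear above), while the gcp-auth selector never does, which is consistent with the PullSecret timeout reported for this test.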
	I1004 02:51:08.245959   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:08.745269   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:09.246363   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:09.746099   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:10.246354   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:10.746224   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:11.246454   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:11.744947   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:12.246099   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:12.746126   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:13.245694   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:13.746043   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:14.246221   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:14.746127   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:15.246082   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:15.746023   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:16.245963   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:16.746251   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:17.245749   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:17.746499   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:18.245972   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:18.746574   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:19.245494   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:19.745988   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:20.245924   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:20.745774   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:21.245810   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:21.745368   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:22.246344   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:22.746450   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:23.246182   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:23.746471   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:24.246589   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:24.745434   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:25.247295   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:25.746293   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:26.246630   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:26.745524   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:27.245748   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:27.746316   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:28.246829   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:28.745571   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:29.246166   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:29.746103   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:30.246477   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:30.746349   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:31.246973   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:31.745800   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:32.246183   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:32.746732   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:33.245535   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:33.745112   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:34.246490   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:34.745218   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:35.245995   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:35.745573   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:36.245361   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:36.746021   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:37.245923   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:37.745458   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:38.245995   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:38.746059   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:39.246387   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:39.745838   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:40.245935   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:40.745542   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:41.246971   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:41.745681   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:42.245593   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:42.746860   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:43.245745   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:43.746238   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:44.246420   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:44.746678   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:45.245206   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:45.746024   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:46.245747   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:46.745220   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:47.246406   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:47.746778   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:48.245804   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:48.748224   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:49.246411   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:49.746311   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:50.246717   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:50.745550   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:51.246298   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:51.746806   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:52.246046   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:52.745889   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:53.245684   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:53.745288   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:54.247609   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:54.745850   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:55.245776   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:55.746173   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:56.246715   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:56.745348   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:57.246410   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:57.746240   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:58.246472   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:58.745882   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:59.245926   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:59.745973   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:00.250275   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:00.746630   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:01.245231   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:01.746948   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:02.246143   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:02.746051   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:03.246149   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:03.746166   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:04.248198   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:04.745864   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:05.245404   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:05.744901   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:06.245761   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:06.745743   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:07.244993   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:07.745878   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:08.245943   17586 kapi.go:107] duration metric: took 2m22.004306963s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1004 02:52:08.248036   17586 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-335265 cluster.
	I1004 02:52:08.249477   17586 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1004 02:52:08.250882   17586 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1004 02:52:08.252147   17586 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, logviewer, inspektor-gadget, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1004 02:52:08.253497   17586 addons.go:510] duration metric: took 2m34.941548087s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin logviewer inspektor-gadget cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1004 02:52:08.253539   17586 start.go:246] waiting for cluster config update ...
	I1004 02:52:08.253559   17586 start.go:255] writing updated cluster config ...
	I1004 02:52:08.253804   17586 ssh_runner.go:195] Run: rm -f paused
	I1004 02:52:08.314013   17586 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 02:52:08.315923   17586 out.go:177] * Done! kubectl is now configured to use "addons-335265" cluster and "default" namespace by default
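	(Editor's note, not part of the captured log: the gcp-auth message above says a pod can opt out of the credential mount by carrying a label with the `gcp-auth-skip-secret` key. The sketch below is a minimal, hypothetical illustration of creating such a pod with client-go; the pod name, image, namespace, and kubeconfig path are assumptions for illustration and do not come from this test run.)

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a clientset from the default kubeconfig (for example the one minikube writes).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical pod name
				Labels: map[string]string{
					// Per the minikube message above, a label with this key tells the
					// gcp-auth webhook not to mount the GCP credentials into this pod.
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
				},
			},
		}

		// Create the pod in the default namespace (illustrative choice).
		if _, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

	(The same effect can be had by adding the label to a pod manifest before `kubectl create`; pods created without the label, like the busybox pod exercised by this test, receive the mounted credentials by default.)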
	
	
	==> CRI-O <==
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.898576814Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011001898551473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585094,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7eaadc3-1d31-47fb-a565-f02e542130da name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.899238638Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a080b67e-7fb2-4f8f-a5f2-cc985881d127 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.899365061Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a080b67e-7fb2-4f8f-a5f2-cc985881d127 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.899688429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:105abce822ec642566111ef308ea64f204cf359d5e083b76e3cbd45dfea09c1f,PodSandboxId:6deed65f66ba17a0e483da06dc0240e649915bc317b911ecb9c3d2a227c66639,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728010977911148213,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea289386-a580-4a9e-ba94-c28adf57b2a0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f80532df2b54eeca5dea38973c5abc06eee1fc3573680405e3904fd4c58bfd,PodSandboxId:6083e583ec12bda5c7c1db60a8d04dfa68bb350b58fa6b890daec0131dae61f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728010861088743567,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3df1714-d414-4b36-9919-09dcd9c98407,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b2ac62ed4c916161ebe965c87faf839df998ada0db68f7a75235f7d27f8c77,PodSandboxId:c8d4c2f42b1741e89fc08f875303e7b20c52fe922d27496efd5382fa411129a8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728010258220979975,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-s5f5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 169ab0ba-09fb-4e2e-8945-5f266e30e94c,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5ba10b922d9f7570958e24cadd0694b083dca71f89a4af196a8bc33f741bf2d6,PodSandboxId:e2431d325426a4974d7a61e2609ba20aa5f4be0324590715ac0c8de677a057d9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f
9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728010245464997820,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-x4dlq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d2843141-7de1-4235-a92f-146b42810e7a,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998c2093c5a777d35c620e682a64f601f3de09be18b0db7acf9050addded70a3,PodSandboxId:24355213e98207654a7515ad3ff38afa125ddaac9824eb600da28932259245cb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728010245340867807,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9xtsn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7a68c29-6bcf-44a5-b763-06a5c1564187,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b277ba738bfb71628250b79299966e29729f7c928b6b565a54f15ec1bed59c7,PodSandboxId:0f84e6d72a9217443d0848dc0605954da3ce2876386aab2751fbc947a1336944,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728010212052406010,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gqwd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e302061-d82b-4ce2-b712-1faed975bc09,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e36868179711202ec475347ab1b68ddef87d001dad8d7a8343163fd7e7805475,PodSandboxId:1c315f8bd33c60d03d289c3e9be0ba4ad22911d284e7f62f796a22f7f32eb670,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&
ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728010191306532840,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3684f708-8dec-41cd-b503-58a74f8f3df3,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70fde3be5e7a73c8d3397f8689977a297aacafe17392798a26c4e03f5f106932,PodSandboxId:d
db6930f4ddcceba53c7b558396e67678e379637fda6cc0135e60b8fbeeece61,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728010180194531619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2eee80-691d-47ad-98f8-c06185ac9dec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b7fed1985f4421f5e6f8a30150b722e9899c6a80b3c56e4f334760b4d51a731,PodSandboxId:94c3a6c8d2150
d0e628678da03ef6b06d35a62e5fcc3e8b96f25df426831092b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728010177265037302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2nft6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010ae061-9933-4fcb-bb73-9c9607bea03e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f3cc713fb4a168b6df4ffcd7057af0478b65edb2a6bb1e98a6ccf04c070dad3,PodSandboxId:6d8fd99c1ac4ba73404693ec6d04fc898e5af9b1162425c587d2928e26683aa7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728010174523721803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sl5bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03727f31-3609-4d9c-ba1d-da91df4ce689,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc78c7278537d65c513bbae8c60fcda039c61c3bdfbf5c3850c0337758bad526,PodSandboxId:4dcc7d42629a67aa6421922eba8b7fac78a019c987f5ff93778b80bc44357849,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728010162871533429,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e8996f305a3968d1f41a37dcaab714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:01dc9a32ed2252be473b6a4ae9f6df2dc8b65d8e0e1961230f1357f0cda4d714,PodSandboxId:880ccf5d995f02005326a0bb9dd0b6fcc8df03d6b9cd832420b50c4543927790,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728010162801324750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21430d03e15a45a1ab18bb07d4ac67d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:ad952c65d22cc95d5aeadd40b7d1fd530cf827d47d8f275a984883f69649b304,PodSandboxId:12a414a780e4d26c78b5163ceec29c6a54b381f80df20b4309014482eb74974b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728010162753992190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fff4526c35266ee7fcdec7c8f648cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0a5d8f063
22eea43987c838dfb9b3c0f18ab742d3357eb113609a9784752bf4,PodSandboxId:096fdc10579a15c8c2eddf3947c6b0cbefe973c7d41b1405499cf67ecefd3ce6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728010162706974061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ecef9a7daca0f7be3ebc78f3ff39fb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="ote
l-collector/interceptors.go:74" id=a080b67e-7fb2-4f8f-a5f2-cc985881d127 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.940414067Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be7d4d1e-952e-43c0-b832-6595f05114f3 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.940489394Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be7d4d1e-952e-43c0-b832-6595f05114f3 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.941567056Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8da7c51e-1888-495d-8717-a0745385d628 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.943148005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011001943116627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585094,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8da7c51e-1888-495d-8717-a0745385d628 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.944051926Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63dd7ff9-0de1-4678-ad04-306e6c4f8e72 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.944128205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63dd7ff9-0de1-4678-ad04-306e6c4f8e72 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.944477833Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:105abce822ec642566111ef308ea64f204cf359d5e083b76e3cbd45dfea09c1f,PodSandboxId:6deed65f66ba17a0e483da06dc0240e649915bc317b911ecb9c3d2a227c66639,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728010977911148213,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea289386-a580-4a9e-ba94-c28adf57b2a0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f80532df2b54eeca5dea38973c5abc06eee1fc3573680405e3904fd4c58bfd,PodSandboxId:6083e583ec12bda5c7c1db60a8d04dfa68bb350b58fa6b890daec0131dae61f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728010861088743567,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3df1714-d414-4b36-9919-09dcd9c98407,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b2ac62ed4c916161ebe965c87faf839df998ada0db68f7a75235f7d27f8c77,PodSandboxId:c8d4c2f42b1741e89fc08f875303e7b20c52fe922d27496efd5382fa411129a8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728010258220979975,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-s5f5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 169ab0ba-09fb-4e2e-8945-5f266e30e94c,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5ba10b922d9f7570958e24cadd0694b083dca71f89a4af196a8bc33f741bf2d6,PodSandboxId:e2431d325426a4974d7a61e2609ba20aa5f4be0324590715ac0c8de677a057d9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f
9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728010245464997820,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-x4dlq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d2843141-7de1-4235-a92f-146b42810e7a,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998c2093c5a777d35c620e682a64f601f3de09be18b0db7acf9050addded70a3,PodSandboxId:24355213e98207654a7515ad3ff38afa125ddaac9824eb600da28932259245cb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728010245340867807,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9xtsn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7a68c29-6bcf-44a5-b763-06a5c1564187,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b277ba738bfb71628250b79299966e29729f7c928b6b565a54f15ec1bed59c7,PodSandboxId:0f84e6d72a9217443d0848dc0605954da3ce2876386aab2751fbc947a1336944,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728010212052406010,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gqwd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e302061-d82b-4ce2-b712-1faed975bc09,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e36868179711202ec475347ab1b68ddef87d001dad8d7a8343163fd7e7805475,PodSandboxId:1c315f8bd33c60d03d289c3e9be0ba4ad22911d284e7f62f796a22f7f32eb670,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&
ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728010191306532840,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3684f708-8dec-41cd-b503-58a74f8f3df3,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70fde3be5e7a73c8d3397f8689977a297aacafe17392798a26c4e03f5f106932,PodSandboxId:d
db6930f4ddcceba53c7b558396e67678e379637fda6cc0135e60b8fbeeece61,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728010180194531619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2eee80-691d-47ad-98f8-c06185ac9dec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b7fed1985f4421f5e6f8a30150b722e9899c6a80b3c56e4f334760b4d51a731,PodSandboxId:94c3a6c8d2150
d0e628678da03ef6b06d35a62e5fcc3e8b96f25df426831092b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728010177265037302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2nft6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010ae061-9933-4fcb-bb73-9c9607bea03e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f3cc713fb4a168b6df4ffcd7057af0478b65edb2a6bb1e98a6ccf04c070dad3,PodSandboxId:6d8fd99c1ac4ba73404693ec6d04fc898e5af9b1162425c587d2928e26683aa7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728010174523721803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sl5bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03727f31-3609-4d9c-ba1d-da91df4ce689,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc78c7278537d65c513bbae8c60fcda039c61c3bdfbf5c3850c0337758bad526,PodSandboxId:4dcc7d42629a67aa6421922eba8b7fac78a019c987f5ff93778b80bc44357849,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728010162871533429,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e8996f305a3968d1f41a37dcaab714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:01dc9a32ed2252be473b6a4ae9f6df2dc8b65d8e0e1961230f1357f0cda4d714,PodSandboxId:880ccf5d995f02005326a0bb9dd0b6fcc8df03d6b9cd832420b50c4543927790,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728010162801324750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21430d03e15a45a1ab18bb07d4ac67d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:ad952c65d22cc95d5aeadd40b7d1fd530cf827d47d8f275a984883f69649b304,PodSandboxId:12a414a780e4d26c78b5163ceec29c6a54b381f80df20b4309014482eb74974b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728010162753992190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fff4526c35266ee7fcdec7c8f648cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0a5d8f063
22eea43987c838dfb9b3c0f18ab742d3357eb113609a9784752bf4,PodSandboxId:096fdc10579a15c8c2eddf3947c6b0cbefe973c7d41b1405499cf67ecefd3ce6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728010162706974061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ecef9a7daca0f7be3ebc78f3ff39fb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="ote
l-collector/interceptors.go:74" id=63dd7ff9-0de1-4678-ad04-306e6c4f8e72 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.981559973Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d24db934-1a10-471d-86a8-a29ae7112c10 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.981630726Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d24db934-1a10-471d-86a8-a29ae7112c10 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.982531977Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6dbd085c-200c-4b22-8f9a-4c079f309743 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.984186677Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011001984158218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585094,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6dbd085c-200c-4b22-8f9a-4c079f309743 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.984811386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff2c183e-8bf5-4eef-8770-3bf1b52fb295 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.984871498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff2c183e-8bf5-4eef-8770-3bf1b52fb295 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:03:21 addons-335265 crio[659]: time="2024-10-04 03:03:21.985150205Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:105abce822ec642566111ef308ea64f204cf359d5e083b76e3cbd45dfea09c1f,PodSandboxId:6deed65f66ba17a0e483da06dc0240e649915bc317b911ecb9c3d2a227c66639,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728010977911148213,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea289386-a580-4a9e-ba94-c28adf57b2a0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f80532df2b54eeca5dea38973c5abc06eee1fc3573680405e3904fd4c58bfd,PodSandboxId:6083e583ec12bda5c7c1db60a8d04dfa68bb350b58fa6b890daec0131dae61f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728010861088743567,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3df1714-d414-4b36-9919-09dcd9c98407,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b2ac62ed4c916161ebe965c87faf839df998ada0db68f7a75235f7d27f8c77,PodSandboxId:c8d4c2f42b1741e89fc08f875303e7b20c52fe922d27496efd5382fa411129a8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728010258220979975,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-s5f5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 169ab0ba-09fb-4e2e-8945-5f266e30e94c,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5ba10b922d9f7570958e24cadd0694b083dca71f89a4af196a8bc33f741bf2d6,PodSandboxId:e2431d325426a4974d7a61e2609ba20aa5f4be0324590715ac0c8de677a057d9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f
9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728010245464997820,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-x4dlq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d2843141-7de1-4235-a92f-146b42810e7a,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998c2093c5a777d35c620e682a64f601f3de09be18b0db7acf9050addded70a3,PodSandboxId:24355213e98207654a7515ad3ff38afa125ddaac9824eb600da28932259245cb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728010245340867807,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9xtsn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7a68c29-6bcf-44a5-b763-06a5c1564187,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b277ba738bfb71628250b79299966e29729f7c928b6b565a54f15ec1bed59c7,PodSandboxId:0f84e6d72a9217443d0848dc0605954da3ce2876386aab2751fbc947a1336944,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728010212052406010,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gqwd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e302061-d82b-4ce2-b712-1faed975bc09,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e36868179711202ec475347ab1b68ddef87d001dad8d7a8343163fd7e7805475,PodSandboxId:1c315f8bd33c60d03d289c3e9be0ba4ad22911d284e7f62f796a22f7f32eb670,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&
ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728010191306532840,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3684f708-8dec-41cd-b503-58a74f8f3df3,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70fde3be5e7a73c8d3397f8689977a297aacafe17392798a26c4e03f5f106932,PodSandboxId:d
db6930f4ddcceba53c7b558396e67678e379637fda6cc0135e60b8fbeeece61,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728010180194531619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2eee80-691d-47ad-98f8-c06185ac9dec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b7fed1985f4421f5e6f8a30150b722e9899c6a80b3c56e4f334760b4d51a731,PodSandboxId:94c3a6c8d2150
d0e628678da03ef6b06d35a62e5fcc3e8b96f25df426831092b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728010177265037302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2nft6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010ae061-9933-4fcb-bb73-9c9607bea03e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f3cc713fb4a168b6df4ffcd7057af0478b65edb2a6bb1e98a6ccf04c070dad3,PodSandboxId:6d8fd99c1ac4ba73404693ec6d04fc898e5af9b1162425c587d2928e26683aa7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728010174523721803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sl5bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03727f31-3609-4d9c-ba1d-da91df4ce689,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc78c7278537d65c513bbae8c60fcda039c61c3bdfbf5c3850c0337758bad526,PodSandboxId:4dcc7d42629a67aa6421922eba8b7fac78a019c987f5ff93778b80bc44357849,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728010162871533429,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e8996f305a3968d1f41a37dcaab714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:01dc9a32ed2252be473b6a4ae9f6df2dc8b65d8e0e1961230f1357f0cda4d714,PodSandboxId:880ccf5d995f02005326a0bb9dd0b6fcc8df03d6b9cd832420b50c4543927790,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728010162801324750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21430d03e15a45a1ab18bb07d4ac67d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:ad952c65d22cc95d5aeadd40b7d1fd530cf827d47d8f275a984883f69649b304,PodSandboxId:12a414a780e4d26c78b5163ceec29c6a54b381f80df20b4309014482eb74974b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728010162753992190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fff4526c35266ee7fcdec7c8f648cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0a5d8f063
22eea43987c838dfb9b3c0f18ab742d3357eb113609a9784752bf4,PodSandboxId:096fdc10579a15c8c2eddf3947c6b0cbefe973c7d41b1405499cf67ecefd3ce6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728010162706974061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ecef9a7daca0f7be3ebc78f3ff39fb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="ote
l-collector/interceptors.go:74" id=ff2c183e-8bf5-4eef-8770-3bf1b52fb295 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:03:22 addons-335265 crio[659]: time="2024-10-04 03:03:22.021139942Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=33ef7ac0-2c2c-4545-8c14-4dab3bf6bf0b name=/runtime.v1.RuntimeService/Version
	Oct 04 03:03:22 addons-335265 crio[659]: time="2024-10-04 03:03:22.021214531Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=33ef7ac0-2c2c-4545-8c14-4dab3bf6bf0b name=/runtime.v1.RuntimeService/Version
	Oct 04 03:03:22 addons-335265 crio[659]: time="2024-10-04 03:03:22.022503416Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34727708-33a1-49f8-9217-ed54895e97a7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:03:22 addons-335265 crio[659]: time="2024-10-04 03:03:22.024750246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011002024710579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585094,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34727708-33a1-49f8-9217-ed54895e97a7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:03:22 addons-335265 crio[659]: time="2024-10-04 03:03:22.029058494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b78f882-0dfb-44cc-9c49-7414f4fef3d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:03:22 addons-335265 crio[659]: time="2024-10-04 03:03:22.029197831Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b78f882-0dfb-44cc-9c49-7414f4fef3d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:03:22 addons-335265 crio[659]: time="2024-10-04 03:03:22.029625439Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:105abce822ec642566111ef308ea64f204cf359d5e083b76e3cbd45dfea09c1f,PodSandboxId:6deed65f66ba17a0e483da06dc0240e649915bc317b911ecb9c3d2a227c66639,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728010977911148213,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea289386-a580-4a9e-ba94-c28adf57b2a0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f80532df2b54eeca5dea38973c5abc06eee1fc3573680405e3904fd4c58bfd,PodSandboxId:6083e583ec12bda5c7c1db60a8d04dfa68bb350b58fa6b890daec0131dae61f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728010861088743567,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3df1714-d414-4b36-9919-09dcd9c98407,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b2ac62ed4c916161ebe965c87faf839df998ada0db68f7a75235f7d27f8c77,PodSandboxId:c8d4c2f42b1741e89fc08f875303e7b20c52fe922d27496efd5382fa411129a8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728010258220979975,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-s5f5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 169ab0ba-09fb-4e2e-8945-5f266e30e94c,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5ba10b922d9f7570958e24cadd0694b083dca71f89a4af196a8bc33f741bf2d6,PodSandboxId:e2431d325426a4974d7a61e2609ba20aa5f4be0324590715ac0c8de677a057d9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f
9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728010245464997820,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-x4dlq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d2843141-7de1-4235-a92f-146b42810e7a,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:998c2093c5a777d35c620e682a64f601f3de09be18b0db7acf9050addded70a3,PodSandboxId:24355213e98207654a7515ad3ff38afa125ddaac9824eb600da28932259245cb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728010245340867807,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9xtsn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7a68c29-6bcf-44a5-b763-06a5c1564187,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b277ba738bfb71628250b79299966e29729f7c928b6b565a54f15ec1bed59c7,PodSandboxId:0f84e6d72a9217443d0848dc0605954da3ce2876386aab2751fbc947a1336944,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728010212052406010,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gqwd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e302061-d82b-4ce2-b712-1faed975bc09,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e36868179711202ec475347ab1b68ddef87d001dad8d7a8343163fd7e7805475,PodSandboxId:1c315f8bd33c60d03d289c3e9be0ba4ad22911d284e7f62f796a22f7f32eb670,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&
ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728010191306532840,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3684f708-8dec-41cd-b503-58a74f8f3df3,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70fde3be5e7a73c8d3397f8689977a297aacafe17392798a26c4e03f5f106932,PodSandboxId:d
db6930f4ddcceba53c7b558396e67678e379637fda6cc0135e60b8fbeeece61,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728010180194531619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2eee80-691d-47ad-98f8-c06185ac9dec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b7fed1985f4421f5e6f8a30150b722e9899c6a80b3c56e4f334760b4d51a731,PodSandboxId:94c3a6c8d2150
d0e628678da03ef6b06d35a62e5fcc3e8b96f25df426831092b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728010177265037302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2nft6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010ae061-9933-4fcb-bb73-9c9607bea03e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f3cc713fb4a168b6df4ffcd7057af0478b65edb2a6bb1e98a6ccf04c070dad3,PodSandboxId:6d8fd99c1ac4ba73404693ec6d04fc898e5af9b1162425c587d2928e26683aa7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728010174523721803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sl5bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03727f31-3609-4d9c-ba1d-da91df4ce689,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc78c7278537d65c513bbae8c60fcda039c61c3bdfbf5c3850c0337758bad526,PodSandboxId:4dcc7d42629a67aa6421922eba8b7fac78a019c987f5ff93778b80bc44357849,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728010162871533429,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e8996f305a3968d1f41a37dcaab714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:01dc9a32ed2252be473b6a4ae9f6df2dc8b65d8e0e1961230f1357f0cda4d714,PodSandboxId:880ccf5d995f02005326a0bb9dd0b6fcc8df03d6b9cd832420b50c4543927790,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728010162801324750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21430d03e15a45a1ab18bb07d4ac67d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:ad952c65d22cc95d5aeadd40b7d1fd530cf827d47d8f275a984883f69649b304,PodSandboxId:12a414a780e4d26c78b5163ceec29c6a54b381f80df20b4309014482eb74974b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728010162753992190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fff4526c35266ee7fcdec7c8f648cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0a5d8f063
22eea43987c838dfb9b3c0f18ab742d3357eb113609a9784752bf4,PodSandboxId:096fdc10579a15c8c2eddf3947c6b0cbefe973c7d41b1405499cf67ecefd3ce6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728010162706974061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ecef9a7daca0f7be3ebc78f3ff39fb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="ote
l-collector/interceptors.go:74" id=1b78f882-0dfb-44cc-9c49-7414f4fef3d4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	105abce822ec6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          24 seconds ago      Running             busybox                   0                   6deed65f66ba1       busybox
	19f80532df2b5       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago       Running             nginx                     0                   6083e583ec12b       nginx
	89b2ac62ed4c9       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             12 minutes ago      Running             controller                0                   c8d4c2f42b174       ingress-nginx-controller-bc57996ff-s5f5k
	5ba10b922d9f7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              patch                     0                   e2431d325426a       ingress-nginx-admission-patch-x4dlq
	998c2093c5a77       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   24355213e9820       ingress-nginx-admission-create-9xtsn
	2b277ba738bfb       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        13 minutes ago      Running             metrics-server            0                   0f84e6d72a921       metrics-server-84c5f94fbc-gqwd8
	e368681797112       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             13 minutes ago      Running             minikube-ingress-dns      0                   1c315f8bd33c6       kube-ingress-dns-minikube
	70fde3be5e7a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             13 minutes ago      Running             storage-provisioner       0                   ddb6930f4ddcc       storage-provisioner
	6b7fed1985f44       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             13 minutes ago      Running             coredns                   0                   94c3a6c8d2150       coredns-7c65d6cfc9-2nft6
	8f3cc713fb4a1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             13 minutes ago      Running             kube-proxy                0                   6d8fd99c1ac4b       kube-proxy-sl5bg
	fc78c7278537d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   4dcc7d42629a6       etcd-addons-335265
	01dc9a32ed225       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   880ccf5d995f0       kube-apiserver-addons-335265
	ad952c65d22cc       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   12a414a780e4d       kube-scheduler-addons-335265
	a0a5d8f06322e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   096fdc10579a1       kube-controller-manager-addons-335265
	
	
	==> coredns [6b7fed1985f4421f5e6f8a30150b722e9899c6a80b3c56e4f334760b4d51a731] <==
	[INFO] 10.244.0.6:42163 - 53632 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.001290066s
	[INFO] 10.244.0.6:42163 - 17203 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000332092s
	[INFO] 10.244.0.6:42163 - 52000 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000706551s
	[INFO] 10.244.0.6:42163 - 25184 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000144316s
	[INFO] 10.244.0.6:42163 - 1224 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000107646s
	[INFO] 10.244.0.6:42163 - 53163 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000112127s
	[INFO] 10.244.0.6:42163 - 16366 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000096862s
	[INFO] 10.244.0.6:36481 - 54074 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131509s
	[INFO] 10.244.0.6:36481 - 53808 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000085015s
	[INFO] 10.244.0.6:54972 - 14134 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067363s
	[INFO] 10.244.0.6:54972 - 13888 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030884s
	[INFO] 10.244.0.6:34804 - 38688 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050445s
	[INFO] 10.244.0.6:34804 - 38445 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003998s
	[INFO] 10.244.0.6:55372 - 42952 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000056965s
	[INFO] 10.244.0.6:55372 - 42552 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000038515s
	[INFO] 10.244.0.22:50136 - 1114 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000478595s
	[INFO] 10.244.0.22:52273 - 2200 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00010202s
	[INFO] 10.244.0.22:52160 - 36893 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000154123s
	[INFO] 10.244.0.22:44776 - 13084 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000085171s
	[INFO] 10.244.0.22:59324 - 20370 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105794s
	[INFO] 10.244.0.22:53977 - 10449 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000059811s
	[INFO] 10.244.0.22:49013 - 59444 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003863365s
	[INFO] 10.244.0.22:50022 - 14333 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.003266749s
	[INFO] 10.244.0.26:43955 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000574937s
	[INFO] 10.244.0.26:55864 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000216958s
	
	
	==> describe nodes <==
	Name:               addons-335265
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-335265
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=addons-335265
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T02_49_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-335265
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 02:49:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-335265
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:03:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:03:03 +0000   Fri, 04 Oct 2024 02:49:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:03:03 +0000   Fri, 04 Oct 2024 02:49:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:03:03 +0000   Fri, 04 Oct 2024 02:49:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:03:03 +0000   Fri, 04 Oct 2024 02:49:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.175
	  Hostname:    addons-335265
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 c63f8ecb0fea4cd4b9fc51defdeb350d
	  System UUID:                c63f8ecb-0fea-4cd4-b9fc-51defdeb350d
	  Boot ID:                    5504ac08-d55b-4b4c-bcaa-04cfbdf152d8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-psznb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-s5f5k    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-2nft6                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-addons-335265                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-335265                250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-335265       200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-sl5bg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-335265                100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-gqwd8             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node addons-335265 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node addons-335265 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node addons-335265 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m   kubelet          Node addons-335265 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node addons-335265 event: Registered Node addons-335265 in Controller
	
	
	==> dmesg <==
	[  +5.261294] systemd-fstab-generator[1339]: Ignoring "noauto" option for root device
	[  +0.172818] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.053525] kauditd_printk_skb: 103 callbacks suppressed
	[  +5.573977] kauditd_printk_skb: 134 callbacks suppressed
	[  +7.161714] kauditd_printk_skb: 88 callbacks suppressed
	[Oct 4 02:50] kauditd_printk_skb: 4 callbacks suppressed
	[ +17.133021] kauditd_printk_skb: 24 callbacks suppressed
	[ +11.239235] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.444647] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.010088] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.520076] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 4 02:51] kauditd_printk_skb: 16 callbacks suppressed
	[Oct 4 02:52] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 4 03:00] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.933001] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.673254] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.106954] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.143709] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.350438] kauditd_printk_skb: 15 callbacks suppressed
	[ +13.829990] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 4 03:01] kauditd_printk_skb: 11 callbacks suppressed
	[ +17.597462] kauditd_printk_skb: 15 callbacks suppressed
	[ +14.762789] kauditd_printk_skb: 59 callbacks suppressed
	[  +6.665083] kauditd_printk_skb: 26 callbacks suppressed
	[Oct 4 03:02] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [fc78c7278537d65c513bbae8c60fcda039c61c3bdfbf5c3850c0337758bad526] <==
	{"level":"info","ts":"2024-10-04T02:50:59.806812Z","caller":"traceutil/trace.go:171","msg":"trace[2014043898] transaction","detail":"{read_only:false; response_revision:1089; number_of_response:1; }","duration":"439.230091ms","start":"2024-10-04T02:50:59.367564Z","end":"2024-10-04T02:50:59.806794Z","steps":["trace[2014043898] 'process raft request'  (duration: 106.1386ms)","trace[2014043898] 'compare'  (duration: 332.67523ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T02:50:59.806890Z","caller":"traceutil/trace.go:171","msg":"trace[389024767] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1124; }","duration":"372.153659ms","start":"2024-10-04T02:50:59.434726Z","end":"2024-10-04T02:50:59.806880Z","steps":["trace[389024767] 'read index received'  (duration: 38.986378ms)","trace[389024767] 'applied index is now lower than readState.Index'  (duration: 333.166806ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T02:50:59.806978Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:50:59.367547Z","time spent":"439.342426ms","remote":"127.0.0.1:51456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":895,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/ingress-nginx/ingress-nginx-controller\" mod_revision:688 > success:<request_put:<key:\"/registry/services/endpoints/ingress-nginx/ingress-nginx-controller\" value_size:820 >> failure:<request_range:<key:\"/registry/services/endpoints/ingress-nginx/ingress-nginx-controller\" > >"}
	{"level":"info","ts":"2024-10-04T02:50:59.807119Z","caller":"traceutil/trace.go:171","msg":"trace[954465718] transaction","detail":"{read_only:false; response_revision:1090; number_of_response:1; }","duration":"432.24846ms","start":"2024-10-04T02:50:59.374861Z","end":"2024-10-04T02:50:59.807110Z","steps":["trace[954465718] 'process raft request'  (duration: 431.875833ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:50:59.807184Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:50:59.374842Z","time spent":"432.312252ms","remote":"127.0.0.1:51548","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1447,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/endpointslices/ingress-nginx/ingress-nginx-controller-admission-9jckd\" mod_revision:692 > success:<request_put:<key:\"/registry/endpointslices/ingress-nginx/ingress-nginx-controller-admission-9jckd\" value_size:1360 >> failure:<request_range:<key:\"/registry/endpointslices/ingress-nginx/ingress-nginx-controller-admission-9jckd\" > >"}
	{"level":"info","ts":"2024-10-04T02:50:59.807358Z","caller":"traceutil/trace.go:171","msg":"trace[1675267081] transaction","detail":"{read_only:false; response_revision:1091; number_of_response:1; }","duration":"432.422295ms","start":"2024-10-04T02:50:59.374930Z","end":"2024-10-04T02:50:59.807352Z","steps":["trace[1675267081] 'process raft request'  (duration: 431.856105ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:50:59.807412Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:50:59.374924Z","time spent":"432.469956ms","remote":"127.0.0.1:51456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":902,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/ingress-nginx/ingress-nginx-controller-admission\" mod_revision:691 > success:<request_put:<key:\"/registry/services/endpoints/ingress-nginx/ingress-nginx-controller-admission\" value_size:817 >> failure:<request_range:<key:\"/registry/services/endpoints/ingress-nginx/ingress-nginx-controller-admission\" > >"}
	{"level":"info","ts":"2024-10-04T02:50:59.807492Z","caller":"traceutil/trace.go:171","msg":"trace[1764288079] transaction","detail":"{read_only:false; response_revision:1092; number_of_response:1; }","duration":"432.512953ms","start":"2024-10-04T02:50:59.374973Z","end":"2024-10-04T02:50:59.807486Z","steps":["trace[1764288079] 'process raft request'  (duration: 431.879101ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:50:59.807540Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:50:59.374969Z","time spent":"432.553424ms","remote":"127.0.0.1:51548","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1410,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/endpointslices/ingress-nginx/ingress-nginx-controller-qbnmr\" mod_revision:687 > success:<request_put:<key:\"/registry/endpointslices/ingress-nginx/ingress-nginx-controller-qbnmr\" value_size:1333 >> failure:<request_range:<key:\"/registry/endpointslices/ingress-nginx/ingress-nginx-controller-qbnmr\" > >"}
	{"level":"warn","ts":"2024-10-04T02:50:59.807621Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"372.894079ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T02:50:59.807658Z","caller":"traceutil/trace.go:171","msg":"trace[6222102] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1092; }","duration":"372.930474ms","start":"2024-10-04T02:50:59.434721Z","end":"2024-10-04T02:50:59.807651Z","steps":["trace[6222102] 'agreement among raft nodes before linearized reading'  (duration: 372.880696ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:50:59.807676Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:50:59.434669Z","time spent":"373.002459ms","remote":"127.0.0.1:51462","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-04T02:50:59.807833Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.666992ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-10-04T02:50:59.807869Z","caller":"traceutil/trace.go:171","msg":"trace[876086469] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1092; }","duration":"264.725792ms","start":"2024-10-04T02:50:59.543135Z","end":"2024-10-04T02:50:59.807861Z","steps":["trace[876086469] 'agreement among raft nodes before linearized reading'  (duration: 264.619832ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T02:51:37.591112Z","caller":"traceutil/trace.go:171","msg":"trace[1664385201] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"237.766427ms","start":"2024-10-04T02:51:37.353306Z","end":"2024-10-04T02:51:37.591072Z","steps":["trace[1664385201] 'process raft request'  (duration: 237.525802ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T02:51:37.592503Z","caller":"traceutil/trace.go:171","msg":"trace[728180221] linearizableReadLoop","detail":"{readStateIndex:1240; appliedIndex:1239; }","duration":"197.260424ms","start":"2024-10-04T02:51:37.395218Z","end":"2024-10-04T02:51:37.592479Z","steps":["trace[728180221] 'read index received'  (duration: 196.335818ms)","trace[728180221] 'applied index is now lower than readState.Index'  (duration: 924.109µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T02:51:37.592742Z","caller":"traceutil/trace.go:171","msg":"trace[296484496] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"218.828676ms","start":"2024-10-04T02:51:37.373905Z","end":"2024-10-04T02:51:37.592733Z","steps":["trace[296484496] 'process raft request'  (duration: 218.517291ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:51:37.593049Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.798234ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.175\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-10-04T02:51:37.593305Z","caller":"traceutil/trace.go:171","msg":"trace[1842154626] range","detail":"{range_begin:/registry/masterleases/192.168.39.175; range_end:; response_count:1; response_revision:1196; }","duration":"198.075614ms","start":"2024-10-04T02:51:37.395213Z","end":"2024-10-04T02:51:37.593289Z","steps":["trace[1842154626] 'agreement among raft nodes before linearized reading'  (duration: 197.736523ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:51:37.593477Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.59647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/gadget.kinvolk.io/traces/\" range_end:\"/registry/gadget.kinvolk.io/traces0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T02:51:37.593574Z","caller":"traceutil/trace.go:171","msg":"trace[858644879] range","detail":"{range_begin:/registry/gadget.kinvolk.io/traces/; range_end:/registry/gadget.kinvolk.io/traces0; response_count:0; response_revision:1196; }","duration":"154.705415ms","start":"2024-10-04T02:51:37.438862Z","end":"2024-10-04T02:51:37.593567Z","steps":["trace[858644879] 'agreement among raft nodes before linearized reading'  (duration: 154.584117ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T02:59:23.996296Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1484}
	{"level":"info","ts":"2024-10-04T02:59:24.029616Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1484,"took":"32.158641ms","hash":2557689659,"current-db-size-bytes":5967872,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3010560,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2024-10-04T02:59:24.029691Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2557689659,"revision":1484,"compact-revision":-1}
	{"level":"info","ts":"2024-10-04T03:02:11.292542Z","caller":"traceutil/trace.go:171","msg":"trace[2074514700] transaction","detail":"{read_only:false; response_revision:2622; number_of_response:1; }","duration":"115.006913ms","start":"2024-10-04T03:02:11.177499Z","end":"2024-10-04T03:02:11.292506Z","steps":["trace[2074514700] 'process raft request'  (duration: 114.883696ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:03:22 up 14 min,  0 users,  load average: 0.57, 0.47, 0.45
	Linux addons-335265 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [01dc9a32ed2252be473b6a4ae9f6df2dc8b65d8e0e1961230f1357f0cda4d714] <==
	E1004 02:51:18.381364       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.57.10:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.57.10:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.57.10:443: connect: connection refused" logger="UnhandledError"
	E1004 02:51:18.383239       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.57.10:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.57.10:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.57.10:443: connect: connection refused" logger="UnhandledError"
	E1004 02:51:18.389037       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.57.10:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.57.10:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.57.10:443: connect: connection refused" logger="UnhandledError"
	I1004 02:51:18.476638       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1004 03:00:49.822738       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1004 03:00:50.448115       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1004 03:00:51.045084       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1004 03:00:52.112854       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1004 03:00:56.589319       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1004 03:00:56.771409       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.121.189"}
	I1004 03:01:21.136198       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:01:21.136311       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 03:01:21.160896       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:01:21.160959       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 03:01:21.189707       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:01:21.189766       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 03:01:21.192928       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:01:21.192981       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 03:01:21.220150       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:01:21.220202       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1004 03:01:22.190102       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1004 03:01:22.222069       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1004 03:01:22.282584       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1004 03:01:35.575771       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.155.54"}
	I1004 03:03:20.859169       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.163.107"}
	
	
	==> kube-controller-manager [a0a5d8f06322eea43987c838dfb9b3c0f18ab742d3357eb113609a9784752bf4] <==
	W1004 03:01:56.782232       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:01:56.782338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1004 03:01:59.287076       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	W1004 03:02:00.574244       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:02:00.574959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1004 03:02:02.354784       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-335265"
	W1004 03:02:04.942649       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:02:04.942711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:02:13.671159       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:02:13.671385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:02:32.003094       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:02:32.003180       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:02:35.226747       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:02:35.226914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:02:46.834105       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:02:46.834199       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1004 03:03:03.040358       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-335265"
	W1004 03:03:09.537362       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:03:09.537471       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:03:18.158800       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:03:18.158913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1004 03:03:20.679123       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.468189ms"
	I1004 03:03:20.696666       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="17.422354ms"
	I1004 03:03:20.696920       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="121.686µs"
	I1004 03:03:20.713370       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="41.413µs"
	
	
	==> kube-proxy [8f3cc713fb4a168b6df4ffcd7057af0478b65edb2a6bb1e98a6ccf04c070dad3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 02:49:35.459044       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 02:49:35.470353       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.175"]
	E1004 02:49:35.470433       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 02:49:35.567051       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 02:49:35.567095       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 02:49:35.567119       1 server_linux.go:169] "Using iptables Proxier"
	I1004 02:49:35.571219       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 02:49:35.571545       1 server.go:483] "Version info" version="v1.31.1"
	I1004 02:49:35.571576       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 02:49:35.578171       1 config.go:199] "Starting service config controller"
	I1004 02:49:35.578218       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 02:49:35.578314       1 config.go:105] "Starting endpoint slice config controller"
	I1004 02:49:35.578319       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 02:49:35.588871       1 config.go:328] "Starting node config controller"
	I1004 02:49:35.588940       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 02:49:35.679231       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 02:49:35.679339       1 shared_informer.go:320] Caches are synced for service config
	I1004 02:49:35.692342       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ad952c65d22cc95d5aeadd40b7d1fd530cf827d47d8f275a984883f69649b304] <==
	W1004 02:49:26.552910       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 02:49:26.553000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.660571       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 02:49:26.660658       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.698906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 02:49:26.698952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.712346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 02:49:26.712432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.794669       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 02:49:26.794842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.795032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 02:49:26.795093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.899220       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 02:49:26.899320       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1004 02:49:26.930892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 02:49:26.930954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.943962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 02:49:26.944014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.960554       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 02:49:26.961399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.976237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 02:49:26.976332       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.999762       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 02:49:26.999902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1004 02:49:29.202807       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 03:02:28 addons-335265 kubelet[1210]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:02:28 addons-335265 kubelet[1210]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:02:28 addons-335265 kubelet[1210]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:02:28 addons-335265 kubelet[1210]: E1004 03:02:28.873332    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010948872718469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576174,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:02:28 addons-335265 kubelet[1210]: E1004 03:02:28.873417    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010948872718469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576174,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:02:31 addons-335265 kubelet[1210]: I1004 03:02:31.335923    1210 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 04 03:02:31 addons-335265 kubelet[1210]: E1004 03:02:31.337860    1210 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="ea289386-a580-4a9e-ba94-c28adf57b2a0"
	Oct 04 03:02:38 addons-335265 kubelet[1210]: E1004 03:02:38.876678    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010958876160310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576174,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:02:38 addons-335265 kubelet[1210]: E1004 03:02:38.877002    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010958876160310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576174,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:02:42 addons-335265 kubelet[1210]: I1004 03:02:42.336730    1210 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 04 03:02:42 addons-335265 kubelet[1210]: E1004 03:02:42.339076    1210 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="ea289386-a580-4a9e-ba94-c28adf57b2a0"
	Oct 04 03:02:48 addons-335265 kubelet[1210]: E1004 03:02:48.879775    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010968879376267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576174,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:02:48 addons-335265 kubelet[1210]: E1004 03:02:48.880066    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010968879376267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576174,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:02:54 addons-335265 kubelet[1210]: I1004 03:02:54.336160    1210 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 04 03:02:58 addons-335265 kubelet[1210]: I1004 03:02:58.083981    1210 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 04 03:02:58 addons-335265 kubelet[1210]: E1004 03:02:58.884315    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010978883641301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585094,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:02:58 addons-335265 kubelet[1210]: E1004 03:02:58.884580    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010978883641301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585094,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:08 addons-335265 kubelet[1210]: E1004 03:03:08.886951    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010988886517836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585094,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:08 addons-335265 kubelet[1210]: E1004 03:03:08.887505    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010988886517836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585094,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:18 addons-335265 kubelet[1210]: E1004 03:03:18.891824    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010998890621633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585094,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:18 addons-335265 kubelet[1210]: E1004 03:03:18.891877    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728010998890621633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585094,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:20 addons-335265 kubelet[1210]: I1004 03:03:20.686860    1210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=24.244713956 podStartE2EDuration="11m12.686827924s" podCreationTimestamp="2024-10-04 02:52:08 +0000 UTC" firstStartedPulling="2024-10-04 02:52:09.453095476 +0000 UTC m=+161.278085299" lastFinishedPulling="2024-10-04 03:02:57.895209445 +0000 UTC m=+809.720199267" observedRunningTime="2024-10-04 03:02:58.100173947 +0000 UTC m=+809.925163803" watchObservedRunningTime="2024-10-04 03:03:20.686827924 +0000 UTC m=+832.511817763"
	Oct 04 03:03:20 addons-335265 kubelet[1210]: E1004 03:03:20.687536    1210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2c99064-b337-4b88-a8a3-6d5e45c89d41" containerName="headlamp"
	Oct 04 03:03:20 addons-335265 kubelet[1210]: I1004 03:03:20.687674    1210 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2c99064-b337-4b88-a8a3-6d5e45c89d41" containerName="headlamp"
	Oct 04 03:03:20 addons-335265 kubelet[1210]: I1004 03:03:20.880406    1210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th89x\" (UniqueName: \"kubernetes.io/projected/301f4c8d-964f-4a62-b1f7-a1c5a2ede151-kube-api-access-th89x\") pod \"hello-world-app-55bf9c44b4-psznb\" (UID: \"301f4c8d-964f-4a62-b1f7-a1c5a2ede151\") " pod="default/hello-world-app-55bf9c44b4-psznb"
	
	
	==> storage-provisioner [70fde3be5e7a73c8d3397f8689977a297aacafe17392798a26c4e03f5f106932] <==
	I1004 02:49:40.906666       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 02:49:41.749565       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 02:49:41.749631       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 02:49:42.145909       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 02:49:42.146108       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-335265_44b73c5d-ab93-44e0-a85b-b47a1860d5db!
	I1004 02:49:42.191335       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"74e1309f-d5d8-4d08-a932-f554ffb03b94", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-335265_44b73c5d-ab93-44e0-a85b-b47a1860d5db became leader
	I1004 02:49:42.448798       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-335265_44b73c5d-ab93-44e0-a85b-b47a1860d5db!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-335265 -n addons-335265
helpers_test.go:261: (dbg) Run:  kubectl --context addons-335265 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-psznb ingress-nginx-admission-create-9xtsn ingress-nginx-admission-patch-x4dlq
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-335265 describe pod hello-world-app-55bf9c44b4-psznb ingress-nginx-admission-create-9xtsn ingress-nginx-admission-patch-x4dlq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-335265 describe pod hello-world-app-55bf9c44b4-psznb ingress-nginx-admission-create-9xtsn ingress-nginx-admission-patch-x4dlq: exit status 1 (66.657953ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-psznb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-335265/192.168.39.175
	Start Time:       Fri, 04 Oct 2024 03:03:20 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-th89x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-th89x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-psznb to addons-335265
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9xtsn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-x4dlq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-335265 describe pod hello-world-app-55bf9c44b4-psznb ingress-nginx-admission-create-9xtsn ingress-nginx-admission-patch-x4dlq: exit status 1
addons_test.go:990: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-amd64 -p addons-335265 addons disable ingress-dns --alsologtostderr -v=1: (1.134945924s)
addons_test.go:990: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 addons disable ingress --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-amd64 -p addons-335265 addons disable ingress --alsologtostderr -v=1: (7.71990549s)
--- FAIL: TestAddons/parallel/Ingress (155.71s)
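The failure above comes down to the in-VM probe (curl against 127.0.0.1 with the Host header nginx.example.com, visible as the unfinished ssh entry in the Audit table further down) never getting a response before the timeout. A minimal manual triage sketch, assuming the addons-335265 profile is still running and that the ingress addon deploys the usual ingress-nginx-controller Deployment in the ingress-nginx namespace (both assumptions, not taken from this report):

    # Check that the Ingress and its backend Service were created in the default namespace
    kubectl --context addons-335265 get ingress,svc -n default
    # Repeat the probe the test performs from inside the VM
    out/minikube-linux-amd64 -p addons-335265 ssh "curl -sI http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Inspect the controller if the probe does not return HTTP 200
    kubectl --context addons-335265 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50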

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (294.43s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:395: metrics-server stabilized in 3.418139ms
addons_test.go:397: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I1004 03:00:20.095617   16879 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1004 03:00:20.095651   16879 kapi.go:107] duration metric: took 8.709617ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "metrics-server-84c5f94fbc-gqwd8" [6e302061-d82b-4ce2-b712-1faed975bc09] Running
addons_test.go:397: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004784371s
addons_test.go:403: (dbg) Run:  kubectl --context addons-335265 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-335265 top pods -n kube-system: exit status 1 (77.765389ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2nft6, age: 10m52.173008384s

                                                
                                                
** /stderr **
I1004 03:00:25.175159   16879 retry.go:31] will retry after 1.82173488s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-335265 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-335265 top pods -n kube-system: exit status 1 (68.824078ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2nft6, age: 10m54.063984048s

                                                
                                                
** /stderr **
I1004 03:00:27.066329   16879 retry.go:31] will retry after 2.505492279s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-335265 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-335265 top pods -n kube-system: exit status 1 (65.124968ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2nft6, age: 10m56.636130123s

                                                
                                                
** /stderr **
I1004 03:00:29.637870   16879 retry.go:31] will retry after 7.10615769s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-335265 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-335265 top pods -n kube-system: exit status 1 (74.090912ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2nft6, age: 11m3.816351366s

                                                
                                                
** /stderr **
I1004 03:00:36.818813   16879 retry.go:31] will retry after 13.074919206s: exit status 1
2024/10/04 03:00:36 [DEBUG] GET http://192.168.39.175:5000
addons_test.go:403: (dbg) Run:  kubectl --context addons-335265 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-335265 top pods -n kube-system: exit status 1 (65.298249ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2nft6, age: 11m16.958476965s

                                                
                                                
** /stderr **
I1004 03:00:49.960058   16879 retry.go:31] will retry after 19.577745227s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-335265 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-335265 top pods -n kube-system: exit status 1 (66.976373ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2nft6, age: 11m36.603530835s

                                                
                                                
** /stderr **
I1004 03:01:09.605319   16879 retry.go:31] will retry after 11.967267721s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-335265 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-335265 top pods -n kube-system: exit status 1 (77.859728ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2nft6, age: 11m48.649757289s

                                                
                                                
** /stderr **
I1004 03:01:21.651527   16879 retry.go:31] will retry after 47.609108598s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-335265 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-335265 top pods -n kube-system: exit status 1 (63.812247ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2nft6, age: 12m36.323749488s

                                                
                                                
** /stderr **
I1004 03:02:09.325414   16879 retry.go:31] will retry after 41.195064639s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-335265 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-335265 top pods -n kube-system: exit status 1 (63.936282ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2nft6, age: 13m17.583119401s

                                                
                                                
** /stderr **
I1004 03:02:50.584811   16879 retry.go:31] will retry after 44.23089293s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-335265 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-335265 top pods -n kube-system: exit status 1 (61.320776ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2nft6, age: 14m1.877482434s

                                                
                                                
** /stderr **
I1004 03:03:34.879268   16879 retry.go:31] will retry after 49.159002382s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-335265 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-335265 top pods -n kube-system: exit status 1 (65.865324ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2nft6, age: 14m51.10426189s

                                                
                                                
** /stderr **
I1004 03:04:24.105914   16879 retry.go:31] will retry after 47.70513073s: exit status 1
addons_test.go:403: (dbg) Run:  kubectl --context addons-335265 top pods -n kube-system
addons_test.go:403: (dbg) Non-zero exit: kubectl --context addons-335265 top pods -n kube-system: exit status 1 (62.960902ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2nft6, age: 15m38.873526929s

                                                
                                                
** /stderr **
addons_test.go:417: failed checking metric server: exit status 1
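Every retry above fails with "Metrics not available", which points at the metrics.k8s.io aggregated API never becoming ready rather than at any single pod; this is consistent with the connection-refused errors against v1beta1.metrics.k8s.io in the kube-apiserver log earlier in this report. A hedged manual check, assuming the addons-335265 profile is still up and that the addon uses the k8s-app=metrics-server label the wait step above selects on:

    # Is the aggregated metrics API registered and marked Available?
    kubectl --context addons-335265 get apiservice v1beta1.metrics.k8s.io
    # Does metrics-server itself log scrape or certificate errors?
    kubectl --context addons-335265 -n kube-system logs -l k8s-app=metrics-server --tail=50
    # Re-run the query the test polls
    kubectl --context addons-335265 top pods -n kube-system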
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-335265 -n addons-335265
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-335265 logs -n 25: (1.273439483s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-583140                                                                     | download-only-583140 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-774332 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | binary-mirror-774332                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34587                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-774332                                                                     | binary-mirror-774332 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| addons  | disable dashboard -p                                                                        | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | addons-335265                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | addons-335265                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-335265 --wait=true                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:52 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=logviewer                                                                          |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-335265 addons disable                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 02:52 UTC | 04 Oct 24 02:52 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-335265 addons disable                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-335265 ssh cat                                                                       | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | /opt/local-path-provisioner/pvc-14e1b505-7a2b-48a9-8f30-4f0b19662b44_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-335265 addons disable                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-335265 ip                                                                            | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	| addons  | addons-335265 addons disable                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-335265 addons disable                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | logviewer --alsologtostderr                                                                 |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-335265 addons                                                                        | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:00 UTC | 04 Oct 24 03:00 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-335265 ssh curl -s                                                                   | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-335265 addons                                                                        | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-335265 addons                                                                        | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-335265 addons disable                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-335265 addons                                                                        | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | -p addons-335265                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | -p addons-335265                                                                            |                      |         |         |                     |                     |
	| addons  | addons-335265 addons disable                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:01 UTC | 04 Oct 24 03:01 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-335265 ip                                                                            | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:03 UTC | 04 Oct 24 03:03 UTC |
	| addons  | addons-335265 addons disable                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:03 UTC | 04 Oct 24 03:03 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-335265 addons disable                                                                | addons-335265        | jenkins | v1.34.0 | 04 Oct 24 03:03 UTC | 04 Oct 24 03:03 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 02:48:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 02:48:42.350397   17586 out.go:345] Setting OutFile to fd 1 ...
	I1004 02:48:42.350509   17586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:48:42.350518   17586 out.go:358] Setting ErrFile to fd 2...
	I1004 02:48:42.350523   17586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:48:42.350678   17586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 02:48:42.351312   17586 out.go:352] Setting JSON to false
	I1004 02:48:42.352109   17586 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1867,"bootTime":1728008255,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 02:48:42.352200   17586 start.go:139] virtualization: kvm guest
	I1004 02:48:42.354280   17586 out.go:177] * [addons-335265] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 02:48:42.355686   17586 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 02:48:42.355690   17586 notify.go:220] Checking for updates...
	I1004 02:48:42.356993   17586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 02:48:42.358275   17586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 02:48:42.359475   17586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 02:48:42.360643   17586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 02:48:42.361726   17586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 02:48:42.363162   17586 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 02:48:42.396244   17586 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 02:48:42.397409   17586 start.go:297] selected driver: kvm2
	I1004 02:48:42.397422   17586 start.go:901] validating driver "kvm2" against <nil>
	I1004 02:48:42.397433   17586 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 02:48:42.398134   17586 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:48:42.398219   17586 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 02:48:42.413943   17586 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 02:48:42.413998   17586 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 02:48:42.414283   17586 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 02:48:42.414315   17586 cni.go:84] Creating CNI manager for ""
	I1004 02:48:42.414372   17586 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 02:48:42.414386   17586 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 02:48:42.414458   17586 start.go:340] cluster config:
	{Name:addons-335265 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-335265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 02:48:42.414603   17586 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:48:42.416533   17586 out.go:177] * Starting "addons-335265" primary control-plane node in "addons-335265" cluster
	I1004 02:48:42.417803   17586 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 02:48:42.417858   17586 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 02:48:42.417884   17586 cache.go:56] Caching tarball of preloaded images
	I1004 02:48:42.417982   17586 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 02:48:42.417994   17586 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 02:48:42.418317   17586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/config.json ...
	I1004 02:48:42.418344   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/config.json: {Name:mkd46b476c8343679536647b0d03e29a5f854756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:42.418499   17586 start.go:360] acquireMachinesLock for addons-335265: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 02:48:42.418559   17586 start.go:364] duration metric: took 45.184µs to acquireMachinesLock for "addons-335265"
	I1004 02:48:42.418583   17586 start.go:93] Provisioning new machine with config: &{Name:addons-335265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-335265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:48:42.418655   17586 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 02:48:42.420283   17586 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1004 02:48:42.420438   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:48:42.420479   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:48:42.435142   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33693
	I1004 02:48:42.435633   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:48:42.436190   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:48:42.436214   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:48:42.436553   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:48:42.436738   17586 main.go:141] libmachine: (addons-335265) Calling .GetMachineName
	I1004 02:48:42.436869   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:48:42.437005   17586 start.go:159] libmachine.API.Create for "addons-335265" (driver="kvm2")
	I1004 02:48:42.437034   17586 client.go:168] LocalClient.Create starting
	I1004 02:48:42.437077   17586 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 02:48:42.684563   17586 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 02:48:42.910608   17586 main.go:141] libmachine: Running pre-create checks...
	I1004 02:48:42.910635   17586 main.go:141] libmachine: (addons-335265) Calling .PreCreateCheck
	I1004 02:48:42.911169   17586 main.go:141] libmachine: (addons-335265) Calling .GetConfigRaw
	I1004 02:48:42.911608   17586 main.go:141] libmachine: Creating machine...
	I1004 02:48:42.911622   17586 main.go:141] libmachine: (addons-335265) Calling .Create
	I1004 02:48:42.911773   17586 main.go:141] libmachine: (addons-335265) Creating KVM machine...
	I1004 02:48:42.912946   17586 main.go:141] libmachine: (addons-335265) DBG | found existing default KVM network
	I1004 02:48:42.913626   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:42.913481   17608 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I1004 02:48:42.913645   17586 main.go:141] libmachine: (addons-335265) DBG | created network xml: 
	I1004 02:48:42.913652   17586 main.go:141] libmachine: (addons-335265) DBG | <network>
	I1004 02:48:42.913658   17586 main.go:141] libmachine: (addons-335265) DBG |   <name>mk-addons-335265</name>
	I1004 02:48:42.913665   17586 main.go:141] libmachine: (addons-335265) DBG |   <dns enable='no'/>
	I1004 02:48:42.913670   17586 main.go:141] libmachine: (addons-335265) DBG |   
	I1004 02:48:42.913676   17586 main.go:141] libmachine: (addons-335265) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1004 02:48:42.913681   17586 main.go:141] libmachine: (addons-335265) DBG |     <dhcp>
	I1004 02:48:42.913687   17586 main.go:141] libmachine: (addons-335265) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1004 02:48:42.913693   17586 main.go:141] libmachine: (addons-335265) DBG |     </dhcp>
	I1004 02:48:42.913699   17586 main.go:141] libmachine: (addons-335265) DBG |   </ip>
	I1004 02:48:42.913704   17586 main.go:141] libmachine: (addons-335265) DBG |   
	I1004 02:48:42.913709   17586 main.go:141] libmachine: (addons-335265) DBG | </network>
	I1004 02:48:42.913717   17586 main.go:141] libmachine: (addons-335265) DBG | 
	I1004 02:48:42.919395   17586 main.go:141] libmachine: (addons-335265) DBG | trying to create private KVM network mk-addons-335265 192.168.39.0/24...
	I1004 02:48:42.986986   17586 main.go:141] libmachine: (addons-335265) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265 ...
	I1004 02:48:42.987017   17586 main.go:141] libmachine: (addons-335265) DBG | private KVM network mk-addons-335265 192.168.39.0/24 created
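The XML printed above is the private libvirt network the kvm2 driver creates for the cluster (a bridge with DHCP on 192.168.39.0/24). A minimal sketch of that same step follows, assuming the virsh CLI is installed and qemu:///system is reachable; the driver itself talks to libvirt through its API, so this is only an illustrative equivalent, not minikube's code.

// Illustrative only: define and start a libvirt network like mk-addons-335265
// by feeding the XML above to the virsh CLI.
package main

import (
	"log"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-addons-335265</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// Define the persistent network, then bring it up.
	for _, args := range [][]string{
		{"-c", "qemu:///system", "net-define", f.Name()},
		{"-c", "qemu:///system", "net-start", "mk-addons-335265"},
	} {
		cmd := exec.Command("virsh", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("virsh %v: %v", args, err)
		}
	}
}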
	I1004 02:48:42.987049   17586 main.go:141] libmachine: (addons-335265) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 02:48:42.987063   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:42.986925   17608 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 02:48:42.987085   17586 main.go:141] libmachine: (addons-335265) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 02:48:43.256261   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:43.256128   17608 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa...
	I1004 02:48:43.498782   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:43.498636   17608 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/addons-335265.rawdisk...
	I1004 02:48:43.498813   17586 main.go:141] libmachine: (addons-335265) DBG | Writing magic tar header
	I1004 02:48:43.498823   17586 main.go:141] libmachine: (addons-335265) DBG | Writing SSH key tar header
	I1004 02:48:43.498834   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:43.498759   17608 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265 ...
	I1004 02:48:43.498851   17586 main.go:141] libmachine: (addons-335265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265
	I1004 02:48:43.498906   17586 main.go:141] libmachine: (addons-335265) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265 (perms=drwx------)
	I1004 02:48:43.498928   17586 main.go:141] libmachine: (addons-335265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 02:48:43.498939   17586 main.go:141] libmachine: (addons-335265) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 02:48:43.498948   17586 main.go:141] libmachine: (addons-335265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 02:48:43.498961   17586 main.go:141] libmachine: (addons-335265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 02:48:43.498967   17586 main.go:141] libmachine: (addons-335265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 02:48:43.498972   17586 main.go:141] libmachine: (addons-335265) DBG | Checking permissions on dir: /home/jenkins
	I1004 02:48:43.498977   17586 main.go:141] libmachine: (addons-335265) DBG | Checking permissions on dir: /home
	I1004 02:48:43.498986   17586 main.go:141] libmachine: (addons-335265) DBG | Skipping /home - not owner
	I1004 02:48:43.498997   17586 main.go:141] libmachine: (addons-335265) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 02:48:43.499010   17586 main.go:141] libmachine: (addons-335265) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 02:48:43.499033   17586 main.go:141] libmachine: (addons-335265) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 02:48:43.499048   17586 main.go:141] libmachine: (addons-335265) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 02:48:43.499053   17586 main.go:141] libmachine: (addons-335265) Creating domain...
	I1004 02:48:43.500076   17586 main.go:141] libmachine: (addons-335265) define libvirt domain using xml: 
	I1004 02:48:43.500107   17586 main.go:141] libmachine: (addons-335265) <domain type='kvm'>
	I1004 02:48:43.500156   17586 main.go:141] libmachine: (addons-335265)   <name>addons-335265</name>
	I1004 02:48:43.500184   17586 main.go:141] libmachine: (addons-335265)   <memory unit='MiB'>4000</memory>
	I1004 02:48:43.500193   17586 main.go:141] libmachine: (addons-335265)   <vcpu>2</vcpu>
	I1004 02:48:43.500203   17586 main.go:141] libmachine: (addons-335265)   <features>
	I1004 02:48:43.500235   17586 main.go:141] libmachine: (addons-335265)     <acpi/>
	I1004 02:48:43.500252   17586 main.go:141] libmachine: (addons-335265)     <apic/>
	I1004 02:48:43.500263   17586 main.go:141] libmachine: (addons-335265)     <pae/>
	I1004 02:48:43.500267   17586 main.go:141] libmachine: (addons-335265)     
	I1004 02:48:43.500275   17586 main.go:141] libmachine: (addons-335265)   </features>
	I1004 02:48:43.500279   17586 main.go:141] libmachine: (addons-335265)   <cpu mode='host-passthrough'>
	I1004 02:48:43.500289   17586 main.go:141] libmachine: (addons-335265)   
	I1004 02:48:43.500301   17586 main.go:141] libmachine: (addons-335265)   </cpu>
	I1004 02:48:43.500308   17586 main.go:141] libmachine: (addons-335265)   <os>
	I1004 02:48:43.500312   17586 main.go:141] libmachine: (addons-335265)     <type>hvm</type>
	I1004 02:48:43.500317   17586 main.go:141] libmachine: (addons-335265)     <boot dev='cdrom'/>
	I1004 02:48:43.500324   17586 main.go:141] libmachine: (addons-335265)     <boot dev='hd'/>
	I1004 02:48:43.500329   17586 main.go:141] libmachine: (addons-335265)     <bootmenu enable='no'/>
	I1004 02:48:43.500332   17586 main.go:141] libmachine: (addons-335265)   </os>
	I1004 02:48:43.500338   17586 main.go:141] libmachine: (addons-335265)   <devices>
	I1004 02:48:43.500345   17586 main.go:141] libmachine: (addons-335265)     <disk type='file' device='cdrom'>
	I1004 02:48:43.500353   17586 main.go:141] libmachine: (addons-335265)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/boot2docker.iso'/>
	I1004 02:48:43.500360   17586 main.go:141] libmachine: (addons-335265)       <target dev='hdc' bus='scsi'/>
	I1004 02:48:43.500365   17586 main.go:141] libmachine: (addons-335265)       <readonly/>
	I1004 02:48:43.500372   17586 main.go:141] libmachine: (addons-335265)     </disk>
	I1004 02:48:43.500378   17586 main.go:141] libmachine: (addons-335265)     <disk type='file' device='disk'>
	I1004 02:48:43.500385   17586 main.go:141] libmachine: (addons-335265)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 02:48:43.500393   17586 main.go:141] libmachine: (addons-335265)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/addons-335265.rawdisk'/>
	I1004 02:48:43.500400   17586 main.go:141] libmachine: (addons-335265)       <target dev='hda' bus='virtio'/>
	I1004 02:48:43.500405   17586 main.go:141] libmachine: (addons-335265)     </disk>
	I1004 02:48:43.500409   17586 main.go:141] libmachine: (addons-335265)     <interface type='network'>
	I1004 02:48:43.500417   17586 main.go:141] libmachine: (addons-335265)       <source network='mk-addons-335265'/>
	I1004 02:48:43.500421   17586 main.go:141] libmachine: (addons-335265)       <model type='virtio'/>
	I1004 02:48:43.500426   17586 main.go:141] libmachine: (addons-335265)     </interface>
	I1004 02:48:43.500433   17586 main.go:141] libmachine: (addons-335265)     <interface type='network'>
	I1004 02:48:43.500438   17586 main.go:141] libmachine: (addons-335265)       <source network='default'/>
	I1004 02:48:43.500444   17586 main.go:141] libmachine: (addons-335265)       <model type='virtio'/>
	I1004 02:48:43.500449   17586 main.go:141] libmachine: (addons-335265)     </interface>
	I1004 02:48:43.500454   17586 main.go:141] libmachine: (addons-335265)     <serial type='pty'>
	I1004 02:48:43.500459   17586 main.go:141] libmachine: (addons-335265)       <target port='0'/>
	I1004 02:48:43.500463   17586 main.go:141] libmachine: (addons-335265)     </serial>
	I1004 02:48:43.500471   17586 main.go:141] libmachine: (addons-335265)     <console type='pty'>
	I1004 02:48:43.500481   17586 main.go:141] libmachine: (addons-335265)       <target type='serial' port='0'/>
	I1004 02:48:43.500486   17586 main.go:141] libmachine: (addons-335265)     </console>
	I1004 02:48:43.500492   17586 main.go:141] libmachine: (addons-335265)     <rng model='virtio'>
	I1004 02:48:43.500497   17586 main.go:141] libmachine: (addons-335265)       <backend model='random'>/dev/random</backend>
	I1004 02:48:43.500501   17586 main.go:141] libmachine: (addons-335265)     </rng>
	I1004 02:48:43.500508   17586 main.go:141] libmachine: (addons-335265)     
	I1004 02:48:43.500512   17586 main.go:141] libmachine: (addons-335265)     
	I1004 02:48:43.500517   17586 main.go:141] libmachine: (addons-335265)   </devices>
	I1004 02:48:43.500523   17586 main.go:141] libmachine: (addons-335265) </domain>
	I1004 02:48:43.500529   17586 main.go:141] libmachine: (addons-335265) 
	I1004 02:48:43.506147   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:e4:2e:9f in network default
	I1004 02:48:43.506594   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:43.506612   17586 main.go:141] libmachine: (addons-335265) Ensuring networks are active...
	I1004 02:48:43.507251   17586 main.go:141] libmachine: (addons-335265) Ensuring network default is active
	I1004 02:48:43.507517   17586 main.go:141] libmachine: (addons-335265) Ensuring network mk-addons-335265 is active
	I1004 02:48:43.508000   17586 main.go:141] libmachine: (addons-335265) Getting domain xml...
	I1004 02:48:43.508579   17586 main.go:141] libmachine: (addons-335265) Creating domain...
	I1004 02:48:44.907669   17586 main.go:141] libmachine: (addons-335265) Waiting to get IP...
	I1004 02:48:44.908672   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:44.909073   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:44.909127   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:44.909073   17608 retry.go:31] will retry after 280.008027ms: waiting for machine to come up
	I1004 02:48:45.190666   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:45.191125   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:45.191152   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:45.191075   17608 retry.go:31] will retry after 243.041026ms: waiting for machine to come up
	I1004 02:48:45.435512   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:45.435972   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:45.435998   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:45.435925   17608 retry.go:31] will retry after 422.640633ms: waiting for machine to come up
	I1004 02:48:45.860583   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:45.861101   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:45.861122   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:45.861056   17608 retry.go:31] will retry after 564.471931ms: waiting for machine to come up
	I1004 02:48:46.426875   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:46.427358   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:46.427395   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:46.427309   17608 retry.go:31] will retry after 530.666332ms: waiting for machine to come up
	I1004 02:48:46.960292   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:46.960759   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:46.960789   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:46.960715   17608 retry.go:31] will retry after 764.969096ms: waiting for machine to come up
	I1004 02:48:47.727333   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:47.727828   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:47.727855   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:47.727793   17608 retry.go:31] will retry after 1.186987659s: waiting for machine to come up
	I1004 02:48:48.916278   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:48.916768   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:48.916796   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:48.916723   17608 retry.go:31] will retry after 1.406687575s: waiting for machine to come up
	I1004 02:48:50.325402   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:50.325831   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:50.325860   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:50.325784   17608 retry.go:31] will retry after 1.690401875s: waiting for machine to come up
	I1004 02:48:52.018537   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:52.019077   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:52.019095   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:52.019046   17608 retry.go:31] will retry after 1.543506793s: waiting for machine to come up
	I1004 02:48:53.563909   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:53.564444   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:53.564502   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:53.564420   17608 retry.go:31] will retry after 2.533992227s: waiting for machine to come up
	I1004 02:48:56.100836   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:56.101280   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:56.101303   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:56.101236   17608 retry.go:31] will retry after 2.289001665s: waiting for machine to come up
	I1004 02:48:58.392193   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:48:58.392572   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:48:58.392593   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:48:58.392551   17608 retry.go:31] will retry after 3.362876269s: waiting for machine to come up
	I1004 02:49:01.757665   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:01.758055   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find current IP address of domain addons-335265 in network mk-addons-335265
	I1004 02:49:01.758076   17586 main.go:141] libmachine: (addons-335265) DBG | I1004 02:49:01.758015   17608 retry.go:31] will retry after 5.109433719s: waiting for machine to come up
	I1004 02:49:06.872014   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:06.872375   17586 main.go:141] libmachine: (addons-335265) Found IP for machine: 192.168.39.175
	I1004 02:49:06.872400   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has current primary IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
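The "will retry after ..." lines above are the driver polling libvirt for the new domain's DHCP lease with a growing, jittered delay until an address appears. A minimal sketch of that retry pattern follows; lookupIP is a hypothetical stand-in for the lease lookup and is not a minikube function.

// Illustrative only: poll with jittered, growing delay until an IP is found
// or the timeout expires, as in the "waiting for machine to come up" lines.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2 // back off, but keep polling at a bounded rate
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet") // simulate the first misses
		}
		return "192.168.39.175", nil
	}, time.Minute)
	fmt.Println(ip, err)
}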
	I1004 02:49:06.872407   17586 main.go:141] libmachine: (addons-335265) Reserving static IP address...
	I1004 02:49:06.872838   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find host DHCP lease matching {name: "addons-335265", mac: "52:54:00:ce:42:f3", ip: "192.168.39.175"} in network mk-addons-335265
	I1004 02:49:06.945882   17586 main.go:141] libmachine: (addons-335265) DBG | Getting to WaitForSSH function...
	I1004 02:49:06.945919   17586 main.go:141] libmachine: (addons-335265) Reserved static IP address: 192.168.39.175
	I1004 02:49:06.945934   17586 main.go:141] libmachine: (addons-335265) Waiting for SSH to be available...
	I1004 02:49:06.948082   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:06.948305   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265
	I1004 02:49:06.948333   17586 main.go:141] libmachine: (addons-335265) DBG | unable to find defined IP address of network mk-addons-335265 interface with MAC address 52:54:00:ce:42:f3
	I1004 02:49:06.948447   17586 main.go:141] libmachine: (addons-335265) DBG | Using SSH client type: external
	I1004 02:49:06.948475   17586 main.go:141] libmachine: (addons-335265) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa (-rw-------)
	I1004 02:49:06.948510   17586 main.go:141] libmachine: (addons-335265) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 02:49:06.948536   17586 main.go:141] libmachine: (addons-335265) DBG | About to run SSH command:
	I1004 02:49:06.948583   17586 main.go:141] libmachine: (addons-335265) DBG | exit 0
	I1004 02:49:06.958792   17586 main.go:141] libmachine: (addons-335265) DBG | SSH cmd err, output: exit status 255: 
	I1004 02:49:06.958815   17586 main.go:141] libmachine: (addons-335265) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1004 02:49:06.958822   17586 main.go:141] libmachine: (addons-335265) DBG | command : exit 0
	I1004 02:49:06.958827   17586 main.go:141] libmachine: (addons-335265) DBG | err     : exit status 255
	I1004 02:49:06.958834   17586 main.go:141] libmachine: (addons-335265) DBG | output  : 
	I1004 02:49:09.960625   17586 main.go:141] libmachine: (addons-335265) DBG | Getting to WaitForSSH function...
	I1004 02:49:09.963056   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:09.963378   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:09.963405   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:09.963542   17586 main.go:141] libmachine: (addons-335265) DBG | Using SSH client type: external
	I1004 02:49:09.963555   17586 main.go:141] libmachine: (addons-335265) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa (-rw-------)
	I1004 02:49:09.963580   17586 main.go:141] libmachine: (addons-335265) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.175 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 02:49:09.963598   17586 main.go:141] libmachine: (addons-335265) DBG | About to run SSH command:
	I1004 02:49:09.963612   17586 main.go:141] libmachine: (addons-335265) DBG | exit 0
	I1004 02:49:10.092290   17586 main.go:141] libmachine: (addons-335265) DBG | SSH cmd err, output: <nil>: 
	I1004 02:49:10.092561   17586 main.go:141] libmachine: (addons-335265) KVM machine creation complete!
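The WaitForSSH step above is a plain reachability probe: run the system ssh client against the guest with the remote command "exit 0" until it exits cleanly (the attempt at 02:49:06 fails with exit status 255 before the address is assigned; the one at 02:49:09 succeeds). A rough sketch of that probe, reusing the user, host and key path from this run as placeholder values; the option list is a subset of the one shown in the log.

// Illustrative only: check SSH reachability by running "ssh ... exit 0".
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshAlive(user, host, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit", "0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa"
	for !sshAlive("docker", "192.168.39.175", key) {
		fmt.Println("SSH not ready, retrying in 3s")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("SSH is available")
}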
	I1004 02:49:10.092892   17586 main.go:141] libmachine: (addons-335265) Calling .GetConfigRaw
	I1004 02:49:10.093446   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:10.093675   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:10.093857   17586 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 02:49:10.093871   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:10.095479   17586 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 02:49:10.095495   17586 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 02:49:10.095502   17586 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 02:49:10.095510   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:10.097826   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.098154   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:10.098188   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.098331   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:10.098549   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.098690   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.098824   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:10.099115   17586 main.go:141] libmachine: Using SSH client type: native
	I1004 02:49:10.099300   17586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1004 02:49:10.099315   17586 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 02:49:10.207072   17586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 02:49:10.207098   17586 main.go:141] libmachine: Detecting the provisioner...
	I1004 02:49:10.207109   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:10.209769   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.210218   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:10.210240   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.210452   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:10.210710   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.210900   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.211131   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:10.211354   17586 main.go:141] libmachine: Using SSH client type: native
	I1004 02:49:10.211542   17586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1004 02:49:10.211556   17586 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 02:49:10.320576   17586 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 02:49:10.320665   17586 main.go:141] libmachine: found compatible host: buildroot
	I1004 02:49:10.320675   17586 main.go:141] libmachine: Provisioning with buildroot...
	I1004 02:49:10.320682   17586 main.go:141] libmachine: (addons-335265) Calling .GetMachineName
	I1004 02:49:10.320922   17586 buildroot.go:166] provisioning hostname "addons-335265"
	I1004 02:49:10.320941   17586 main.go:141] libmachine: (addons-335265) Calling .GetMachineName
	I1004 02:49:10.321085   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:10.323697   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.324018   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:10.324041   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.324264   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:10.324467   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.324678   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.324791   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:10.324947   17586 main.go:141] libmachine: Using SSH client type: native
	I1004 02:49:10.325104   17586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1004 02:49:10.325115   17586 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-335265 && echo "addons-335265" | sudo tee /etc/hostname
	I1004 02:49:10.450449   17586 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-335265
	
	I1004 02:49:10.450482   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:10.453337   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.453670   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:10.453698   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.453862   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:10.454033   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.454178   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.454281   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:10.454565   17586 main.go:141] libmachine: Using SSH client type: native
	I1004 02:49:10.454749   17586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1004 02:49:10.454771   17586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-335265' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-335265/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-335265' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 02:49:10.572646   17586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 02:49:10.572678   17586 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 02:49:10.572716   17586 buildroot.go:174] setting up certificates
	I1004 02:49:10.572728   17586 provision.go:84] configureAuth start
	I1004 02:49:10.572737   17586 main.go:141] libmachine: (addons-335265) Calling .GetMachineName
	I1004 02:49:10.572974   17586 main.go:141] libmachine: (addons-335265) Calling .GetIP
	I1004 02:49:10.576042   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.576425   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:10.576456   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.576557   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:10.578465   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.578770   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:10.578800   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.578937   17586 provision.go:143] copyHostCerts
	I1004 02:49:10.579011   17586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 02:49:10.579140   17586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 02:49:10.579215   17586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 02:49:10.579278   17586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.addons-335265 san=[127.0.0.1 192.168.39.175 addons-335265 localhost minikube]
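The server certificate above is minted locally and signed by the profile CA, with the listed IPs and host names as subject alternative names. The sketch below shows that general recipe with Go's standard crypto/x509 package; the CA file names, serial number and validity period are placeholders, not minikube's exact values.

// Illustrative only: issue a CA-signed server certificate with the SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA pair; "ca.pem" and "ca-key.pem" are placeholder paths and the
	// key is assumed to be a PKCS#1 RSA key.
	caCertPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		log.Fatal("no PEM data in CA files")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the server, plus a template carrying the SANs from the log line.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-335265"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // placeholder validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-335265", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.175")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}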
	I1004 02:49:10.877231   17586 provision.go:177] copyRemoteCerts
	I1004 02:49:10.877293   17586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 02:49:10.877318   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:10.880092   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.880505   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:10.880533   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:10.880781   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:10.880973   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:10.881136   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:10.881277   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:10.966818   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 02:49:10.993346   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 02:49:11.018096   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1004 02:49:11.043905   17586 provision.go:87] duration metric: took 471.164406ms to configureAuth
	I1004 02:49:11.043940   17586 buildroot.go:189] setting minikube options for container-runtime
	I1004 02:49:11.044149   17586 config.go:182] Loaded profile config "addons-335265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 02:49:11.044233   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:11.046930   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.047265   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:11.047292   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.047424   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:11.047609   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:11.047765   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:11.047895   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:11.048041   17586 main.go:141] libmachine: Using SSH client type: native
	I1004 02:49:11.048218   17586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1004 02:49:11.048238   17586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 02:49:11.294849   17586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 02:49:11.294884   17586 main.go:141] libmachine: Checking connection to Docker...
	I1004 02:49:11.294895   17586 main.go:141] libmachine: (addons-335265) Calling .GetURL
	I1004 02:49:11.296425   17586 main.go:141] libmachine: (addons-335265) DBG | Using libvirt version 6000000
	I1004 02:49:11.298760   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.299055   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:11.299085   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.299271   17586 main.go:141] libmachine: Docker is up and running!
	I1004 02:49:11.299287   17586 main.go:141] libmachine: Reticulating splines...
	I1004 02:49:11.299297   17586 client.go:171] duration metric: took 28.86225455s to LocalClient.Create
	I1004 02:49:11.299323   17586 start.go:167] duration metric: took 28.862319682s to libmachine.API.Create "addons-335265"
	I1004 02:49:11.299337   17586 start.go:293] postStartSetup for "addons-335265" (driver="kvm2")
	I1004 02:49:11.299352   17586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 02:49:11.299373   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:11.299598   17586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 02:49:11.299620   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:11.301489   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.301799   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:11.301822   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.302037   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:11.302209   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:11.302372   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:11.302491   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:11.390911   17586 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 02:49:11.395868   17586 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 02:49:11.395891   17586 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 02:49:11.395962   17586 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 02:49:11.395987   17586 start.go:296] duration metric: took 96.641368ms for postStartSetup
	I1004 02:49:11.396016   17586 main.go:141] libmachine: (addons-335265) Calling .GetConfigRaw
	I1004 02:49:11.396583   17586 main.go:141] libmachine: (addons-335265) Calling .GetIP
	I1004 02:49:11.399152   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.399521   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:11.399544   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.399771   17586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/config.json ...
	I1004 02:49:11.399985   17586 start.go:128] duration metric: took 28.981318746s to createHost
	I1004 02:49:11.400011   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:11.402487   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.402761   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:11.402787   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.402955   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:11.403111   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:11.403269   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:11.403500   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:11.403682   17586 main.go:141] libmachine: Using SSH client type: native
	I1004 02:49:11.403897   17586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1004 02:49:11.403913   17586 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 02:49:11.516789   17586 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728010151.472017341
	
	I1004 02:49:11.516825   17586 fix.go:216] guest clock: 1728010151.472017341
	I1004 02:49:11.516839   17586 fix.go:229] Guest: 2024-10-04 02:49:11.472017341 +0000 UTC Remote: 2024-10-04 02:49:11.399997501 +0000 UTC m=+29.083341978 (delta=72.01984ms)
	I1004 02:49:11.516902   17586 fix.go:200] guest clock delta is within tolerance: 72.01984ms
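	For context, the fix.go lines above compare the guest's "date +%s.%N" output against the host clock and accept the start when the skew is small. The snippet below is a minimal Go sketch of that comparison; the parseGuestClock helper and the 2-second tolerance are illustrative assumptions, not minikube's actual code.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts "seconds.nanoseconds" output from `date +%s.%N`
    // into a time.Time. It assumes %N yields a full 9-digit nanosecond field,
    // as in the log above.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1728010151.472017341") // value from the log above
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed threshold, for illustration only
        fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta < tolerance)
    }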
	I1004 02:49:11.516911   17586 start.go:83] releasing machines lock for "addons-335265", held for 29.098338654s
	I1004 02:49:11.516940   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:11.517173   17586 main.go:141] libmachine: (addons-335265) Calling .GetIP
	I1004 02:49:11.519751   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.520075   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:11.520098   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.520295   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:11.520918   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:11.521064   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:11.521171   17586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 02:49:11.521211   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:11.521344   17586 ssh_runner.go:195] Run: cat /version.json
	I1004 02:49:11.521370   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:11.523881   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.524070   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.524297   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:11.524333   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.524420   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:11.524432   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:11.524442   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:11.524628   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:11.524666   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:11.524776   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:11.524845   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:11.524914   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:11.524978   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:11.525036   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:11.605338   17586 ssh_runner.go:195] Run: systemctl --version
	I1004 02:49:11.632586   17586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 02:49:11.794711   17586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 02:49:11.800775   17586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 02:49:11.800851   17586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 02:49:11.818575   17586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 02:49:11.818603   17586 start.go:495] detecting cgroup driver to use...
	I1004 02:49:11.818661   17586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 02:49:11.837900   17586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 02:49:11.853499   17586 docker.go:217] disabling cri-docker service (if available) ...
	I1004 02:49:11.853564   17586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 02:49:11.868702   17586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 02:49:11.883720   17586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 02:49:12.006317   17586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 02:49:12.172796   17586 docker.go:233] disabling docker service ...
	I1004 02:49:12.172875   17586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 02:49:12.188267   17586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 02:49:12.201665   17586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 02:49:12.331533   17586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 02:49:12.470603   17586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 02:49:12.485954   17586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 02:49:12.505763   17586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 02:49:12.505829   17586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:49:12.517242   17586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 02:49:12.517299   17586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:49:12.529098   17586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:49:12.540182   17586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:49:12.551326   17586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 02:49:12.562891   17586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:49:12.574005   17586 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:49:12.592358   17586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
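	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, the cgroup manager, the conmon cgroup and the default sysctls are each patched line by line. A rough Go equivalent of the first two substitutions is sketched below for illustration; it mirrors the regexes from the log but is not minikube's crio.go implementation.

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            panic(err)
        }
    }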
	I1004 02:49:12.603654   17586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 02:49:12.613663   17586 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 02:49:12.613728   17586 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 02:49:12.626763   17586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 02:49:12.637129   17586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:49:12.757602   17586 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 02:49:12.852191   17586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 02:49:12.852270   17586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 02:49:12.856943   17586 start.go:563] Will wait 60s for crictl version
	I1004 02:49:12.857013   17586 ssh_runner.go:195] Run: which crictl
	I1004 02:49:12.860955   17586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 02:49:12.909268   17586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
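	Both waits above ("Will wait 60s for socket path /var/run/crio/crio.sock" and "Will wait 60s for crictl version") are simple polls. Below is a hedged Go sketch of the socket poll; the 250 ms interval and the waitForSocket name are assumptions for illustration, not minikube's start.go code.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for the CRI socket path until it appears or the
    // timeout expires, mirroring the 60s wait logged above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(250 * time.Millisecond) // assumed poll interval
        }
        return fmt.Errorf("socket %s did not appear within %v", path, timeout)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("crio socket is ready")
    }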
	I1004 02:49:12.909397   17586 ssh_runner.go:195] Run: crio --version
	I1004 02:49:12.939138   17586 ssh_runner.go:195] Run: crio --version
	I1004 02:49:12.970565   17586 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 02:49:12.972062   17586 main.go:141] libmachine: (addons-335265) Calling .GetIP
	I1004 02:49:12.974673   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:12.974998   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:12.975046   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:12.975247   17586 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 02:49:12.979596   17586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
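	The /etc/hosts update above is an idempotent upsert: any existing line ending in a tab plus host.minikube.internal is dropped and a fresh "192.168.39.1<TAB>host.minikube.internal" entry is appended. A minimal Go sketch of the same rewrite, assuming the upsertHostsEntry helper name, is shown below; minikube itself runs the shell pipeline from the log.

    package main

    import (
        "os"
        "strings"
    )

    // upsertHostsEntry removes any line that already maps the given hostname and
    // appends a fresh "ip<TAB>hostname" entry, mirroring the grep -v / echo pipeline above.
    func upsertHostsEntry(path, ip, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(line, "\t"+hostname) {
                continue // drop the stale entry
            }
            kept = append(kept, line)
        }
        // Trim trailing empty elements so blank lines do not accumulate.
        for len(kept) > 0 && kept[len(kept)-1] == "" {
            kept = kept[:len(kept)-1]
        }
        kept = append(kept, ip+"\t"+hostname)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }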
	I1004 02:49:12.992202   17586 kubeadm.go:883] updating cluster {Name:addons-335265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:addons-335265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 02:49:12.992318   17586 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 02:49:12.992371   17586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:49:13.024977   17586 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 02:49:13.025060   17586 ssh_runner.go:195] Run: which lz4
	I1004 02:49:13.029250   17586 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 02:49:13.033491   17586 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 02:49:13.033523   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 02:49:14.390272   17586 crio.go:462] duration metric: took 1.361058115s to copy over tarball
	I1004 02:49:14.390346   17586 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 02:49:16.621297   17586 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.230905703s)
	I1004 02:49:16.621327   17586 crio.go:469] duration metric: took 2.231020363s to extract the tarball
	I1004 02:49:16.621336   17586 ssh_runner.go:146] rm: /preloaded.tar.lz4
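	The preload handling above first stats /preloaded.tar.lz4 on the guest, copies the cached ~388 MB tarball over SSH when it is missing, extracts it into /var with lz4, and then deletes it. The following Go sketch illustrates that flow locally with the same tar invocation; it is a simplified stand-in, not minikube's preload code.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"
        if _, err := os.Stat(tarball); err != nil {
            // In the log the file is missing, so minikube scp's the cached tarball first.
            fmt.Println("tarball not present; it would be copied over SSH at this point")
            return
        }
        // Same extraction command as in the log: unpack the preloaded images into /var.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
        _ = os.Remove(tarball) // the log removes the tarball after extraction
    }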
	I1004 02:49:16.657763   17586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:49:16.704887   17586 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 02:49:16.704911   17586 cache_images.go:84] Images are preloaded, skipping loading
	I1004 02:49:16.704924   17586 kubeadm.go:934] updating node { 192.168.39.175 8443 v1.31.1 crio true true} ...
	I1004 02:49:16.705024   17586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-335265 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-335265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 02:49:16.705105   17586 ssh_runner.go:195] Run: crio config
	I1004 02:49:16.752599   17586 cni.go:84] Creating CNI manager for ""
	I1004 02:49:16.752620   17586 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 02:49:16.752629   17586 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 02:49:16.752650   17586 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.175 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-335265 NodeName:addons-335265 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 02:49:16.752801   17586 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-335265"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.175"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 02:49:16.752893   17586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 02:49:16.763279   17586 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 02:49:16.763338   17586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 02:49:16.773228   17586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1004 02:49:16.791835   17586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 02:49:16.809879   17586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1004 02:49:16.828225   17586 ssh_runner.go:195] Run: grep 192.168.39.175	control-plane.minikube.internal$ /etc/hosts
	I1004 02:49:16.832408   17586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.175	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:49:16.845664   17586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:49:16.960830   17586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 02:49:16.978732   17586 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265 for IP: 192.168.39.175
	I1004 02:49:16.978752   17586 certs.go:194] generating shared ca certs ...
	I1004 02:49:16.978767   17586 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:16.978914   17586 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 02:49:17.255351   17586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt ...
	I1004 02:49:17.255388   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt: {Name:mk416c223763546798382e3c7879793784b195dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.255580   17586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key ...
	I1004 02:49:17.255593   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key: {Name:mk7b03a367acc8df80e5914cf093d4079eeff7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.255667   17586 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 02:49:17.344240   17586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt ...
	I1004 02:49:17.344268   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt: {Name:mk830b345da9508afe57eca6a4e1ca21dba647dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.344450   17586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key ...
	I1004 02:49:17.344468   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key: {Name:mk6d04a07117246d1d3824f24d28d81c1c93d061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.344579   17586 certs.go:256] generating profile certs ...
	I1004 02:49:17.344652   17586 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.key
	I1004 02:49:17.344681   17586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt with IP's: []
	I1004 02:49:17.435135   17586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt ...
	I1004 02:49:17.435170   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: {Name:mk534c4041233364f5de809317ca233dbe4111cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.435342   17586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.key ...
	I1004 02:49:17.435354   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.key: {Name:mk28bb05ec433e3b1aa54e512ad157bcefd823a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.435420   17586 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.key.1a707ed9
	I1004 02:49:17.435438   17586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.crt.1a707ed9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.175]
	I1004 02:49:17.528084   17586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.crt.1a707ed9 ...
	I1004 02:49:17.528115   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.crt.1a707ed9: {Name:mkf136cc463a971160b90826f670648c403a3599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.528280   17586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.key.1a707ed9 ...
	I1004 02:49:17.528295   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.key.1a707ed9: {Name:mkcf6196d67e4c9ec7e9bdd97058b1b2e144b2dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.528364   17586 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.crt.1a707ed9 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.crt
	I1004 02:49:17.528431   17586 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.key.1a707ed9 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.key
	I1004 02:49:17.528475   17586 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/proxy-client.key
	I1004 02:49:17.528491   17586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/proxy-client.crt with IP's: []
	I1004 02:49:17.816145   17586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/proxy-client.crt ...
	I1004 02:49:17.816180   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/proxy-client.crt: {Name:mkac5bc584424a73c1f4ef5cc082ab252c5dec3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.816335   17586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/proxy-client.key ...
	I1004 02:49:17.816346   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/proxy-client.key: {Name:mke8531f4884dc4e8612ff83e9a2c1a996031a54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:17.816519   17586 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 02:49:17.816558   17586 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 02:49:17.816584   17586 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 02:49:17.816608   17586 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
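	The certs.go steps above create the shared minikubeCA and proxyClientCA authorities and then issue the profile certificates, with the apiserver certificate carrying the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.175. The sketch below shows the same idea with the standard crypto/x509 package (a self-signed CA plus one server certificate signed by it); it is illustrative only and not minikube's crypto.go.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Self-signed CA, analogous to the "minikubeCA" certificate written above.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Server certificate signed by the CA, carrying the apiserver IP SANs from the log.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.175"),
            },
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        check(err)

        // Emit the server certificate as PEM, the same on-disk format as apiserver.crt.
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
    }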
	I1004 02:49:17.817134   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 02:49:17.849435   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 02:49:17.876023   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 02:49:17.901542   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 02:49:17.927993   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1004 02:49:17.954307   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 02:49:17.980129   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 02:49:18.006769   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 02:49:18.032633   17586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 02:49:18.057710   17586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 02:49:18.074969   17586 ssh_runner.go:195] Run: openssl version
	I1004 02:49:18.081094   17586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 02:49:18.092277   17586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:49:18.097133   17586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:49:18.097209   17586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:49:18.103473   17586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
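	The openssl steps above wire the minikube CA into the system trust store: the certificate is linked under /usr/share/ca-certificates, its subject hash is obtained with "openssl x509 -hash -noout", and a hash-named symlink (b5213941.0 here) is created under /etc/ssl/certs so OpenSSL's lookup-by-hash finds it. A small Go sketch of that idea, shelling out to the same openssl command, follows; it is not minikube's certs.go.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        const certPath = "/usr/share/ca-certificates/minikubeCA.pem"

        // Same command as in the log: print the OpenSSL subject hash of the CA.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

        // Create /etc/ssl/certs/<hash>.0 pointing at the CA so lookup-by-hash succeeds.
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ignore the error if the link does not exist yet
        if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
            panic(err)
        }
        fmt.Println("created", link)
    }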
	I1004 02:49:18.114795   17586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 02:49:18.119211   17586 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 02:49:18.119274   17586 kubeadm.go:392] StartCluster: {Name:addons-335265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:addons-335265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 02:49:18.119360   17586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 02:49:18.119407   17586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 02:49:18.155217   17586 cri.go:89] found id: ""
	I1004 02:49:18.155296   17586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 02:49:18.165622   17586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 02:49:18.175899   17586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 02:49:18.186061   17586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 02:49:18.186091   17586 kubeadm.go:157] found existing configuration files:
	
	I1004 02:49:18.186143   17586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 02:49:18.195765   17586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 02:49:18.195835   17586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 02:49:18.205944   17586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 02:49:18.215612   17586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 02:49:18.215687   17586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 02:49:18.225604   17586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 02:49:18.235547   17586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 02:49:18.235615   17586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 02:49:18.246173   17586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 02:49:18.255931   17586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 02:49:18.255990   17586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
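	The config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it; on this fresh VM the files simply do not exist yet, so every grep exits with status 2 and the rm calls are no-ops. A minimal Go sketch of that cleanup decision is given below for illustration; the real logic lives in minikube's kubeadm.go.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                // Missing file: nothing to clean up, same as the "No such file" case above.
                continue
            }
            if !strings.Contains(string(data), endpoint) {
                // Stale config pointing elsewhere: remove it so kubeadm regenerates it.
                if err := os.Remove(f); err != nil {
                    fmt.Fprintln(os.Stderr, "remove:", err)
                }
            }
        }
    }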
	I1004 02:49:18.266436   17586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 02:49:18.315072   17586 kubeadm.go:310] W1004 02:49:18.269357     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 02:49:18.315763   17586 kubeadm.go:310] W1004 02:49:18.270420     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 02:49:18.446332   17586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 02:49:29.008798   17586 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 02:49:29.008862   17586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 02:49:29.008964   17586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 02:49:29.009112   17586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 02:49:29.009215   17586 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 02:49:29.009293   17586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 02:49:29.010960   17586 out.go:235]   - Generating certificates and keys ...
	I1004 02:49:29.011038   17586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 02:49:29.011099   17586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 02:49:29.011192   17586 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 02:49:29.011277   17586 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1004 02:49:29.011356   17586 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1004 02:49:29.011412   17586 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1004 02:49:29.011491   17586 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1004 02:49:29.011637   17586 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-335265 localhost] and IPs [192.168.39.175 127.0.0.1 ::1]
	I1004 02:49:29.011713   17586 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1004 02:49:29.011894   17586 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-335265 localhost] and IPs [192.168.39.175 127.0.0.1 ::1]
	I1004 02:49:29.011997   17586 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 02:49:29.012099   17586 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 02:49:29.012186   17586 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1004 02:49:29.012281   17586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 02:49:29.012355   17586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 02:49:29.012435   17586 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 02:49:29.012516   17586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 02:49:29.012602   17586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 02:49:29.012686   17586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 02:49:29.012810   17586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 02:49:29.012895   17586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 02:49:29.014500   17586 out.go:235]   - Booting up control plane ...
	I1004 02:49:29.014611   17586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 02:49:29.014702   17586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 02:49:29.014786   17586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 02:49:29.014898   17586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 02:49:29.015013   17586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 02:49:29.015055   17586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 02:49:29.015187   17586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 02:49:29.015278   17586 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 02:49:29.015355   17586 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.07151ms
	I1004 02:49:29.015426   17586 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 02:49:29.015480   17586 kubeadm.go:310] [api-check] The API server is healthy after 5.50214285s
	I1004 02:49:29.015595   17586 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 02:49:29.015711   17586 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 02:49:29.015763   17586 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 02:49:29.015968   17586 kubeadm.go:310] [mark-control-plane] Marking the node addons-335265 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 02:49:29.016042   17586 kubeadm.go:310] [bootstrap-token] Using token: nfgnag.mugyjuqzatxni5xt
	I1004 02:49:29.017666   17586 out.go:235]   - Configuring RBAC rules ...
	I1004 02:49:29.017752   17586 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 02:49:29.017821   17586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 02:49:29.017932   17586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 02:49:29.018049   17586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 02:49:29.018154   17586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 02:49:29.018248   17586 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 02:49:29.018352   17586 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 02:49:29.018392   17586 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 02:49:29.018498   17586 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 02:49:29.018514   17586 kubeadm.go:310] 
	I1004 02:49:29.018596   17586 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 02:49:29.018606   17586 kubeadm.go:310] 
	I1004 02:49:29.018713   17586 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 02:49:29.018727   17586 kubeadm.go:310] 
	I1004 02:49:29.018761   17586 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 02:49:29.018836   17586 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 02:49:29.018883   17586 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 02:49:29.018889   17586 kubeadm.go:310] 
	I1004 02:49:29.018937   17586 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 02:49:29.018943   17586 kubeadm.go:310] 
	I1004 02:49:29.018983   17586 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 02:49:29.018990   17586 kubeadm.go:310] 
	I1004 02:49:29.019041   17586 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 02:49:29.019107   17586 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 02:49:29.019172   17586 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 02:49:29.019186   17586 kubeadm.go:310] 
	I1004 02:49:29.019292   17586 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 02:49:29.019355   17586 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 02:49:29.019362   17586 kubeadm.go:310] 
	I1004 02:49:29.019429   17586 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nfgnag.mugyjuqzatxni5xt \
	I1004 02:49:29.019511   17586 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 \
	I1004 02:49:29.019542   17586 kubeadm.go:310] 	--control-plane 
	I1004 02:49:29.019550   17586 kubeadm.go:310] 
	I1004 02:49:29.019625   17586 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 02:49:29.019633   17586 kubeadm.go:310] 
	I1004 02:49:29.019708   17586 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nfgnag.mugyjuqzatxni5xt \
	I1004 02:49:29.019872   17586 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 
	I1004 02:49:29.019890   17586 cni.go:84] Creating CNI manager for ""
	I1004 02:49:29.019899   17586 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 02:49:29.021342   17586 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 02:49:29.022515   17586 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 02:49:29.034532   17586 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 02:49:29.055639   17586 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 02:49:29.055771   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:29.055820   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-335265 minikube.k8s.io/updated_at=2024_10_04T02_49_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=addons-335265 minikube.k8s.io/primary=true
	I1004 02:49:29.089558   17586 ops.go:34] apiserver oom_adj: -16
	I1004 02:49:29.211558   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:29.712413   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:30.211932   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:30.711890   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:31.212436   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:31.711915   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:32.212472   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:32.712083   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:33.211723   17586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:49:33.311027   17586 kubeadm.go:1113] duration metric: took 4.255321603s to wait for elevateKubeSystemPrivileges
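	The repeated "kubectl get sa default" runs above are a retry loop: minikube polls roughly every half second until the default service account exists in the new cluster (about 4.3 s here). The sketch below shows such a poll in Go; the waitForDefaultSA name, the 500 ms interval and the 2-minute timeout are assumptions, not minikube's actual loop.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
    // timeout expires, mirroring the retry loop visible in the log above.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default service account not ready after %v", timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("default service account is ready")
    }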
	I1004 02:49:33.311068   17586 kubeadm.go:394] duration metric: took 15.191797173s to StartCluster
	I1004 02:49:33.311091   17586 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:33.311227   17586 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 02:49:33.311629   17586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:49:33.311880   17586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 02:49:33.311897   17586 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:49:33.311956   17586 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:true metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1004 02:49:33.312081   17586 addons.go:69] Setting ingress=true in profile "addons-335265"
	I1004 02:49:33.312098   17586 addons.go:69] Setting yakd=true in profile "addons-335265"
	I1004 02:49:33.312115   17586 addons.go:234] Setting addon ingress=true in "addons-335265"
	I1004 02:49:33.312120   17586 config.go:182] Loaded profile config "addons-335265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 02:49:33.312130   17586 addons.go:234] Setting addon yakd=true in "addons-335265"
	I1004 02:49:33.312135   17586 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-335265"
	I1004 02:49:33.312152   17586 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-335265"
	I1004 02:49:33.312161   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312169   17586 addons.go:69] Setting inspektor-gadget=true in profile "addons-335265"
	I1004 02:49:33.312172   17586 addons.go:69] Setting default-storageclass=true in profile "addons-335265"
	I1004 02:49:33.312181   17586 addons.go:234] Setting addon inspektor-gadget=true in "addons-335265"
	I1004 02:49:33.312186   17586 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-335265"
	I1004 02:49:33.312173   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312204   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312226   17586 addons.go:69] Setting ingress-dns=true in profile "addons-335265"
	I1004 02:49:33.312220   17586 addons.go:69] Setting cloud-spanner=true in profile "addons-335265"
	I1004 02:49:33.312245   17586 addons.go:234] Setting addon ingress-dns=true in "addons-335265"
	I1004 02:49:33.312268   17586 addons.go:234] Setting addon cloud-spanner=true in "addons-335265"
	I1004 02:49:33.312288   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312304   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312623   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.312663   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.312668   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.312693   17586 addons.go:69] Setting logviewer=true in profile "addons-335265"
	I1004 02:49:33.312727   17586 addons.go:69] Setting registry=true in profile "addons-335265"
	I1004 02:49:33.312733   17586 addons.go:69] Setting volumesnapshots=true in profile "addons-335265"
	I1004 02:49:33.312737   17586 addons.go:69] Setting storage-provisioner=true in profile "addons-335265"
	I1004 02:49:33.312743   17586 addons.go:234] Setting addon registry=true in "addons-335265"
	I1004 02:49:33.312751   17586 addons.go:234] Setting addon storage-provisioner=true in "addons-335265"
	I1004 02:49:33.312751   17586 addons.go:69] Setting volcano=true in profile "addons-335265"
	I1004 02:49:33.312756   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.312766   17586 addons.go:234] Setting addon volcano=true in "addons-335265"
	I1004 02:49:33.312699   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.312772   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312786   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312799   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.312841   17586 addons.go:234] Setting addon logviewer=true in "addons-335265"
	I1004 02:49:33.312874   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312967   17586 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-335265"
	I1004 02:49:33.313066   17586 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-335265"
	I1004 02:49:33.313111   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312161   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.313154   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.312766   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.313195   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.313239   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.313262   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.313479   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.312737   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.313511   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.312744   17586 addons.go:234] Setting addon volumesnapshots=true in "addons-335265"
	I1004 02:49:33.313111   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.313567   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.313577   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.313595   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.313602   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.312709   17586 addons.go:69] Setting gcp-auth=true in profile "addons-335265"
	I1004 02:49:33.312720   17586 addons.go:69] Setting metrics-server=true in profile "addons-335265"
	I1004 02:49:33.312719   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.313644   17586 addons.go:234] Setting addon metrics-server=true in "addons-335265"
	I1004 02:49:33.313653   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.313579   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.313700   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.313723   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.312716   17586 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-335265"
	I1004 02:49:33.313776   17586 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-335265"
	I1004 02:49:33.313821   17586 out.go:177] * Verifying Kubernetes components...
	I1004 02:49:33.313943   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.314116   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.314127   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.314143   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.314150   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.314345   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.314371   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.312700   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.314423   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.315292   17586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:49:33.313633   17586 mustload.go:65] Loading cluster: addons-335265
	I1004 02:49:33.348335   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I1004 02:49:33.351915   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44379
	I1004 02:49:33.351934   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44575
	I1004 02:49:33.352107   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37717
	I1004 02:49:33.352210   17586 config.go:182] Loaded profile config "addons-335265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 02:49:33.352302   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I1004 02:49:33.352718   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.352766   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.360638   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.360674   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.360722   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44043
	I1004 02:49:33.360742   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.360674   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.360781   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.361457   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.361482   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.361630   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.361646   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.361673   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.361680   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.361727   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.361829   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.361841   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.362018   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.362230   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.362249   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.362262   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.362433   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.362458   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.362521   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.362577   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.362627   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.362666   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.362725   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.362763   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.363519   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.363562   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.364709   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.364742   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.365106   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.365600   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.365632   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.368193   17586 addons.go:234] Setting addon default-storageclass=true in "addons-335265"
	I1004 02:49:33.368236   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.368662   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.368699   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.375574   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34451
	I1004 02:49:33.375593   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40171
	I1004 02:49:33.376334   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.376459   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.377291   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.377324   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.378059   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.378104   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.378345   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.378527   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.393762   17586 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-335265"
	I1004 02:49:33.393816   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.394039   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.394059   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I1004 02:49:33.394077   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41233
	I1004 02:49:33.394099   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40207
	I1004 02:49:33.394154   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.394190   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.394063   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.394505   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.394701   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.394780   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.395357   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.395395   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.395440   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.395455   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.395592   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.395612   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.396004   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.396007   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.396050   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.396499   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.396532   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.396585   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.396629   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.396760   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37919
	I1004 02:49:33.397994   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.398165   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.398178   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.398676   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.398692   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.399138   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.399199   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33809
	I1004 02:49:33.399922   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.399958   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.400187   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.400882   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.401397   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.401435   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.401815   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I1004 02:49:33.402308   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.402324   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.402719   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.403261   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.403545   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.403562   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.403994   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.404029   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.404052   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.404684   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.404734   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.406031   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I1004 02:49:33.406510   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.406918   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.406942   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.407324   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.407506   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.409305   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.411570   17586 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1004 02:49:33.413027   17586 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1004 02:49:33.413046   17586 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1004 02:49:33.413067   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.416453   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.416573   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.416595   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.416864   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.417062   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.417217   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.417355   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.417904   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45481
	I1004 02:49:33.420406   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.421823   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.421854   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.421955   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43065
	I1004 02:49:33.422443   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.422967   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.422984   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.423330   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.423937   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.423975   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.424175   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35797
	I1004 02:49:33.424765   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.425184   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.425201   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.425495   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.426007   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.426042   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.427991   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I1004 02:49:33.428167   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.428348   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.428432   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.430243   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.430954   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.430977   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.431506   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.431661   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.432396   17586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1004 02:49:33.432661   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I1004 02:49:33.433301   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.433496   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.434065   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.434090   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.434519   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.434784   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.434911   17586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1004 02:49:33.435036   17586 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 02:49:33.436420   17586 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:49:33.436444   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 02:49:33.436462   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.436419   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:33.436888   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.436925   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.438326   17586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1004 02:49:33.439928   17586 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1004 02:49:33.439944   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1004 02:49:33.439959   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.440476   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.441921   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.442020   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.442664   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.442861   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.442965   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.443052   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.443709   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.444144   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.444171   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.444408   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.444581   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.444720   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.444854   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.447107   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46397
	I1004 02:49:33.447800   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.448406   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.448425   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.448830   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.449601   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.450352   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44265
	I1004 02:49:33.450967   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.451635   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.451652   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.451794   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.452580   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.452935   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.453729   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41531
	I1004 02:49:33.453844   17586 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1004 02:49:33.454185   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.454710   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.454727   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.455241   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.455366   17586 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1004 02:49:33.455383   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1004 02:49:33.455402   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.455620   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.457150   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I1004 02:49:33.457451   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.458078   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.458591   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.458624   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.459322   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1004 02:49:33.459424   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.459475   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33145
	I1004 02:49:33.459534   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.459739   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:33.459755   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:33.460006   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:33.460028   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:33.460037   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:33.460048   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:33.460339   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:33.460352   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	W1004 02:49:33.460448   17586 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1004 02:49:33.461650   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I1004 02:49:33.461659   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.461664   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I1004 02:49:33.461773   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1004 02:49:33.461815   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.462113   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.462211   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.462367   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.462387   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.462660   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.462678   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.462966   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.462981   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.463588   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.463618   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.463652   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.463705   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44895
	I1004 02:49:33.463919   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.464174   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.464236   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.464288   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.464442   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.464578   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1004 02:49:33.464595   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.464743   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.465110   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45621
	I1004 02:49:33.465143   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.465185   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.465500   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.465812   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.465989   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.466355   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.466223   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I1004 02:49:33.466732   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.466771   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.466776   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.466972   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.467057   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.467253   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.467268   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.467817   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1004 02:49:33.467847   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.468404   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.468509   17586 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1004 02:49:33.468613   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35309
	I1004 02:49:33.468759   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.468860   17586 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1004 02:49:33.469022   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.469309   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.469730   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.469751   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.469885   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.469945   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.470052   17586 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 02:49:33.470056   17586 out.go:177]   - Using image docker.io/ivans3/minikube-log-viewer:v1
	I1004 02:49:33.470068   17586 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 02:49:33.470160   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.470201   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.470052   17586 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1004 02:49:33.470222   17586 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1004 02:49:33.470238   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.470275   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.470286   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.470890   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.470948   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.471284   17586 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1004 02:49:33.471744   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.471360   17586 addons.go:431] installing /etc/kubernetes/addons/logviewer-dp-and-svc.yaml
	I1004 02:49:33.471804   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/logviewer-dp-and-svc.yaml (2016 bytes)
	I1004 02:49:33.471818   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.471580   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:33.472058   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:33.472255   17586 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1004 02:49:33.472422   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1004 02:49:33.473242   17586 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1004 02:49:33.473262   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1004 02:49:33.473279   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.473999   17586 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1004 02:49:33.474816   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.475094   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39363
	I1004 02:49:33.475296   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37935
	I1004 02:49:33.475402   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1004 02:49:33.475432   17586 out.go:177]   - Using image docker.io/registry:2.8.3
	I1004 02:49:33.475837   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.475532   17586 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1004 02:49:33.475932   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1004 02:49:33.475950   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.476012   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.476494   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1004 02:49:33.476498   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.477053   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.477088   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.477124   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.477471   17586 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1004 02:49:33.477491   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1004 02:49:33.477508   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.477893   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.477510   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.477714   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.478034   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.478051   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.477860   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.478078   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.478306   17586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1004 02:49:33.478323   17586 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1004 02:49:33.478340   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.478518   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.478523   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.478662   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.478677   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.478900   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.478934   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.479080   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.479144   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.479271   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.479308   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.479432   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.479726   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.479894   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.479914   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.480047   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1004 02:49:33.480417   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.480583   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.480708   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.480857   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.481054   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.481263   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.481422   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.481722   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.482103   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.482130   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.482288   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.482479   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.482590   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.482686   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.482919   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.483028   17586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1004 02:49:33.483242   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.483261   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.483384   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.483529   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.483662   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.483797   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.484040   17586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1004 02:49:33.484055   17586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1004 02:49:33.484070   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.484109   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.484121   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.484689   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.484745   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.484768   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.484784   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.484810   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.485085   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.485290   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.485398   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.485540   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.486093   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.486346   17586 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 02:49:33.486358   17586 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 02:49:33.486380   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.489049   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.490222   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.490263   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.490317   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.490342   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.490639   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.490668   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.490673   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.490840   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.490863   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.490982   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.491007   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.491130   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.491247   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.502831   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32895
	I1004 02:49:33.503255   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:33.503724   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:33.503743   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:33.504112   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:33.504279   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:33.506069   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:33.508108   17586 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1004 02:49:33.509789   17586 out.go:177]   - Using image docker.io/busybox:stable
	I1004 02:49:33.511272   17586 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1004 02:49:33.511290   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1004 02:49:33.511309   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:33.514611   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.515039   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:33.515068   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:33.515211   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:33.515342   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:33.515492   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:33.515706   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:33.886224   17586 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1004 02:49:33.886248   17586 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1004 02:49:33.896054   17586 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1004 02:49:33.896080   17586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1004 02:49:33.901319   17586 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 02:49:33.901339   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1004 02:49:33.945893   17586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 02:49:33.946344   17586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 02:49:33.948133   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1004 02:49:33.972688   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 02:49:33.987719   17586 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 02:49:33.987749   17586 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 02:49:33.990078   17586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1004 02:49:33.990103   17586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1004 02:49:34.005444   17586 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1004 02:49:34.005465   17586 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1004 02:49:34.014318   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1004 02:49:34.030068   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:49:34.057095   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1004 02:49:34.085700   17586 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1004 02:49:34.085726   17586 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1004 02:49:34.090870   17586 addons.go:431] installing /etc/kubernetes/addons/logviewer-rbac.yaml
	I1004 02:49:34.090888   17586 ssh_runner.go:362] scp logviewer/logviewer-rbac.yaml --> /etc/kubernetes/addons/logviewer-rbac.yaml (1064 bytes)
	I1004 02:49:34.106271   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1004 02:49:34.111121   17586 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1004 02:49:34.111146   17586 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1004 02:49:34.112787   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1004 02:49:34.115049   17586 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1004 02:49:34.115067   17586 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1004 02:49:34.175071   17586 node_ready.go:35] waiting up to 6m0s for node "addons-335265" to be "Ready" ...
	I1004 02:49:34.181373   17586 node_ready.go:49] node "addons-335265" has status "Ready":"True"
	I1004 02:49:34.181409   17586 node_ready.go:38] duration metric: took 6.298512ms for node "addons-335265" to be "Ready" ...
	I1004 02:49:34.181421   17586 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:49:34.191489   17586 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2nft6" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:34.231218   17586 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1004 02:49:34.231250   17586 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1004 02:49:34.245776   17586 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:49:34.245808   17586 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 02:49:34.261310   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/logviewer-dp-and-svc.yaml -f /etc/kubernetes/addons/logviewer-rbac.yaml
	I1004 02:49:34.266569   17586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1004 02:49:34.266596   17586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1004 02:49:34.330346   17586 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1004 02:49:34.330376   17586 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1004 02:49:34.345074   17586 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1004 02:49:34.345103   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1004 02:49:34.453371   17586 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1004 02:49:34.453405   17586 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1004 02:49:34.468098   17586 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1004 02:49:34.468131   17586 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1004 02:49:34.493561   17586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1004 02:49:34.493586   17586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1004 02:49:34.497625   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:49:34.598033   17586 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1004 02:49:34.598065   17586 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1004 02:49:34.604066   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1004 02:49:34.719646   17586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1004 02:49:34.719672   17586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1004 02:49:34.720247   17586 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1004 02:49:34.720262   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1004 02:49:34.757085   17586 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1004 02:49:34.757106   17586 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1004 02:49:34.791069   17586 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1004 02:49:34.791096   17586 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1004 02:49:34.971059   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1004 02:49:34.994816   17586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1004 02:49:34.994840   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1004 02:49:35.065592   17586 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 02:49:35.065612   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1004 02:49:35.147113   17586 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1004 02:49:35.147136   17586 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1004 02:49:35.406831   17586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1004 02:49:35.406862   17586 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1004 02:49:35.528200   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 02:49:35.584032   17586 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1004 02:49:35.584062   17586 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1004 02:49:35.748491   17586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1004 02:49:35.748511   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1004 02:49:35.890310   17586 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1004 02:49:35.890337   17586 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1004 02:49:36.077004   17586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1004 02:49:36.077032   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1004 02:49:36.146287   17586 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1004 02:49:36.146308   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1004 02:49:36.200452   17586 pod_ready.go:103] pod "coredns-7c65d6cfc9-2nft6" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:36.277013   17586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1004 02:49:36.277037   17586 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1004 02:49:36.430708   17586 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.484328726s)
	I1004 02:49:36.430738   17586 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1004 02:49:36.510247   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1004 02:49:36.552653   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1004 02:49:36.950073   17586 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-335265" context rescaled to 1 replicas
	I1004 02:49:38.239191   17586 pod_ready.go:103] pod "coredns-7c65d6cfc9-2nft6" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:40.493877   17586 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1004 02:49:40.493919   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:40.496548   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:40.496948   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:40.496979   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:40.497135   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:40.497363   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:40.497533   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:40.497749   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:40.730094   17586 pod_ready.go:103] pod "coredns-7c65d6cfc9-2nft6" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:40.914225   17586 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1004 02:49:41.190464   17586 addons.go:234] Setting addon gcp-auth=true in "addons-335265"
	I1004 02:49:41.190518   17586 host.go:66] Checking if "addons-335265" exists ...
	I1004 02:49:41.190858   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:41.190896   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:41.206666   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32775
	I1004 02:49:41.207572   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:41.208149   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:41.208172   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:41.208495   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:41.209020   17586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 02:49:41.209049   17586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:49:41.224402   17586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42151
	I1004 02:49:41.224876   17586 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:49:41.225406   17586 main.go:141] libmachine: Using API Version  1
	I1004 02:49:41.225431   17586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:49:41.225703   17586 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:49:41.225862   17586 main.go:141] libmachine: (addons-335265) Calling .GetState
	I1004 02:49:41.227358   17586 main.go:141] libmachine: (addons-335265) Calling .DriverName
	I1004 02:49:41.227551   17586 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1004 02:49:41.227575   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHHostname
	I1004 02:49:41.230940   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:41.231413   17586 main.go:141] libmachine: (addons-335265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:42:f3", ip: ""} in network mk-addons-335265: {Iface:virbr1 ExpiryTime:2024-10-04 03:48:58 +0000 UTC Type:0 Mac:52:54:00:ce:42:f3 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:addons-335265 Clientid:01:52:54:00:ce:42:f3}
	I1004 02:49:41.231443   17586 main.go:141] libmachine: (addons-335265) DBG | domain addons-335265 has defined IP address 192.168.39.175 and MAC address 52:54:00:ce:42:f3 in network mk-addons-335265
	I1004 02:49:41.231597   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHPort
	I1004 02:49:41.231765   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHKeyPath
	I1004 02:49:41.231910   17586 main.go:141] libmachine: (addons-335265) Calling .GetSSHUsername
	I1004 02:49:41.232029   17586 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/addons-335265/id_rsa Username:docker}
	I1004 02:49:41.720463   17586 pod_ready.go:93] pod "coredns-7c65d6cfc9-2nft6" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.720489   17586 pod_ready.go:82] duration metric: took 7.528971184s for pod "coredns-7c65d6cfc9-2nft6" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.720501   17586 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vms49" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.754679   17586 pod_ready.go:93] pod "coredns-7c65d6cfc9-vms49" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.754705   17586 pod_ready.go:82] duration metric: took 34.19619ms for pod "coredns-7c65d6cfc9-vms49" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.754718   17586 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-335265" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.785086   17586 pod_ready.go:93] pod "etcd-addons-335265" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.785108   17586 pod_ready.go:82] duration metric: took 30.383054ms for pod "etcd-addons-335265" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.785119   17586 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-335265" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.799731   17586 pod_ready.go:93] pod "kube-apiserver-addons-335265" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:41.799756   17586 pod_ready.go:82] duration metric: took 14.628834ms for pod "kube-apiserver-addons-335265" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:41.799769   17586 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-335265" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:42.322999   17586 pod_ready.go:93] pod "kube-controller-manager-addons-335265" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:42.323021   17586 pod_ready.go:82] duration metric: took 523.243938ms for pod "kube-controller-manager-addons-335265" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:42.323036   17586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sl5bg" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:42.539515   17586 pod_ready.go:93] pod "kube-proxy-sl5bg" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:42.539561   17586 pod_ready.go:82] duration metric: took 216.497077ms for pod "kube-proxy-sl5bg" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:42.539573   17586 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-335265" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:42.919968   17586 pod_ready.go:93] pod "kube-scheduler-addons-335265" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:42.919990   17586 pod_ready.go:82] duration metric: took 380.410368ms for pod "kube-scheduler-addons-335265" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:42.919997   17586 pod_ready.go:39] duration metric: took 8.738564467s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:49:42.920012   17586 api_server.go:52] waiting for apiserver process to appear ...
	I1004 02:49:42.920058   17586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 02:49:42.953040   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.004872963s)
	I1004 02:49:42.953058   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.980343573s)
	I1004 02:49:42.953090   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953101   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953114   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.938767641s)
	I1004 02:49:42.953129   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953138   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953164   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953179   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953201   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.923099395s)
	I1004 02:49:42.953237   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953265   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953298   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.896169977s)
	I1004 02:49:42.953337   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953348   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953359   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.953372   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.953379   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953386   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953433   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.953444   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.953444   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.847143597s)
	I1004 02:49:42.953453   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953462   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953491   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.840686228s)
	I1004 02:49:42.953494   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.953461   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953508   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953512   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953518   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953537   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.953444   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.953541   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.953546   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.953549   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.953555   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953557   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953562   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953564   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953589   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/logviewer-dp-and-svc.yaml -f /etc/kubernetes/addons/logviewer-rbac.yaml: (8.69225589s)
	I1004 02:49:42.953603   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953611   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953673   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.456016985s)
	I1004 02:49:42.953691   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953700   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.953933   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.349836434s)
	I1004 02:49:42.953953   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.953966   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.954037   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.982953657s)
	I1004 02:49:42.954056   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.954064   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.954183   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.425954315s)
	W1004 02:49:42.954208   17586 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1004 02:49:42.954237   17586 retry.go:31] will retry after 203.667463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1004 02:49:42.954346   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.444061841s)
	I1004 02:49:42.954363   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.954371   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.957760   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.957761   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.957796   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.957803   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.957808   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.957811   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.957819   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.957826   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.957866   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.957875   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.957881   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.957889   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.957894   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.957896   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.957920   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.957927   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.957931   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.957934   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.957938   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.957941   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.957945   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.957952   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.957987   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958000   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958007   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958013   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958020   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958027   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958049   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.958056   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.958104   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958128   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958134   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958141   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.958147   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.958203   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958212   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958219   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.958225   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.958267   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958286   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958292   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958301   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:42.958307   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:42.958342   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958360   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958366   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958430   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958448   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958456   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958465   17586 addons.go:475] Verifying addon ingress=true in "addons-335265"
	I1004 02:49:42.958688   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958709   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958731   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958737   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958782   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958802   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958810   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.958818   17586 addons.go:475] Verifying addon registry=true in "addons-335265"
	I1004 02:49:42.958901   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.958927   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.958933   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.959154   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.959177   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.960610   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.959193   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.959210   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.960667   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.959233   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.960702   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.959336   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.959492   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.960908   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.960986   17586 out.go:177] * Verifying registry addon...
	I1004 02:49:42.959762   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:42.959801   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:42.961043   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:42.961067   17586 addons.go:475] Verifying addon metrics-server=true in "addons-335265"
	I1004 02:49:42.961097   17586 out.go:177] * Verifying ingress addon...
	I1004 02:49:42.962070   17586 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-335265 service yakd-dashboard -n yakd-dashboard
	
	I1004 02:49:42.963864   17586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1004 02:49:42.963868   17586 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1004 02:49:42.974966   17586 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1004 02:49:42.974999   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:42.979384   17586 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1004 02:49:42.979401   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:43.000454   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:43.000477   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:43.000728   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:43.000779   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:43.000793   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	W1004 02:49:43.000896   17586 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1004 02:49:43.012240   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:43.012257   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:43.012490   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:43.012508   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:43.158991   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 02:49:43.473236   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:43.473417   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:43.995974   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:43.996523   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:44.330915   17586 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.103343121s)
	I1004 02:49:44.330944   17586 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.410869693s)
	I1004 02:49:44.330967   17586 api_server.go:72] duration metric: took 11.019046059s to wait for apiserver process to appear ...
	I1004 02:49:44.330975   17586 api_server.go:88] waiting for apiserver healthz status ...
	I1004 02:49:44.330996   17586 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1004 02:49:44.330911   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.778208695s)
	I1004 02:49:44.331140   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:44.331160   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:44.331471   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:44.331481   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:44.331544   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:44.331557   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:44.331564   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:44.331800   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:44.331814   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:44.331823   17586 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-335265"
	I1004 02:49:44.331830   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:44.332664   17586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1004 02:49:44.333537   17586 out.go:177] * Verifying csi-hostpath-driver addon...
	I1004 02:49:44.335020   17586 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1004 02:49:44.335757   17586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1004 02:49:44.336178   17586 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1004 02:49:44.336194   17586 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1004 02:49:44.342296   17586 api_server.go:279] https://192.168.39.175:8443/healthz returned 200:
	ok
	I1004 02:49:44.343523   17586 api_server.go:141] control plane version: v1.31.1
	I1004 02:49:44.343552   17586 api_server.go:131] duration metric: took 12.569393ms to wait for apiserver health ...
	I1004 02:49:44.343563   17586 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 02:49:44.348725   17586 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1004 02:49:44.348756   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:44.359730   17586 system_pods.go:59] 19 kube-system pods found
	I1004 02:49:44.359775   17586 system_pods.go:61] "coredns-7c65d6cfc9-2nft6" [010ae061-9933-4fcb-bb73-9c9607bea03e] Running
	I1004 02:49:44.359802   17586 system_pods.go:61] "coredns-7c65d6cfc9-vms49" [7ae77679-4aea-4650-b804-4b62d483ceb2] Running
	I1004 02:49:44.359815   17586 system_pods.go:61] "csi-hostpath-attacher-0" [f8cef70e-6711-4e45-986e-990453722a26] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1004 02:49:44.359824   17586 system_pods.go:61] "csi-hostpath-resizer-0" [2dc3007a-e1ee-4845-88b8-512ac894863d] Pending
	I1004 02:49:44.359841   17586 system_pods.go:61] "csi-hostpathplugin-fzd54" [b04e23ab-8e0e-416e-8280-7dee1a52b8a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1004 02:49:44.359852   17586 system_pods.go:61] "etcd-addons-335265" [b1eb136d-5c61-4604-93df-2b7b04a05254] Running
	I1004 02:49:44.359861   17586 system_pods.go:61] "kube-apiserver-addons-335265" [1381dd5e-1b56-4429-93be-d878c04cb93c] Running
	I1004 02:49:44.359871   17586 system_pods.go:61] "kube-controller-manager-addons-335265" [6e317c8f-8e29-4a47-940d-1ca2ae208303] Running
	I1004 02:49:44.359883   17586 system_pods.go:61] "kube-ingress-dns-minikube" [3684f708-8dec-41cd-b503-58a74f8f3df3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1004 02:49:44.359893   17586 system_pods.go:61] "kube-proxy-sl5bg" [03727f31-3609-4d9c-ba1d-da91df4ce689] Running
	I1004 02:49:44.359900   17586 system_pods.go:61] "kube-scheduler-addons-335265" [9e73330c-1229-4615-b08a-ac733c781949] Running
	I1004 02:49:44.359913   17586 system_pods.go:61] "logviewer-7c79c8bcc9-ddvsm" [eaf2b3b6-6d22-4038-8bdc-d56ceebb3cb6] Pending / Ready:ContainersNotReady (containers with unready status: [logviewer]) / ContainersReady:ContainersNotReady (containers with unready status: [logviewer])
	I1004 02:49:44.359925   17586 system_pods.go:61] "metrics-server-84c5f94fbc-gqwd8" [6e302061-d82b-4ce2-b712-1faed975bc09] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 02:49:44.359940   17586 system_pods.go:61] "nvidia-device-plugin-daemonset-hk8t5" [9fc5b35d-0561-41df-ae69-27953695f6e2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1004 02:49:44.359954   17586 system_pods.go:61] "registry-66c9cd494c-nfhcd" [bf27c03f-b1e2-412d-a96b-4bb669dd6fd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1004 02:49:44.359967   17586 system_pods.go:61] "registry-proxy-csj4d" [b56921d1-efcc-463f-9f04-40fd7fde1775] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1004 02:49:44.359981   17586 system_pods.go:61] "snapshot-controller-56fcc65765-52lpd" [57e9c889-df7e-43b8-9cec-8ce9e6caaa21] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1004 02:49:44.359995   17586 system_pods.go:61] "snapshot-controller-56fcc65765-zkf5w" [68acf020-754d-4bb3-8793-bfd1aa2974dc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1004 02:49:44.360006   17586 system_pods.go:61] "storage-provisioner" [4f2eee80-691d-47ad-98f8-c06185ac9dec] Running
	I1004 02:49:44.360020   17586 system_pods.go:74] duration metric: took 16.443666ms to wait for pod list to return data ...
	I1004 02:49:44.360040   17586 default_sa.go:34] waiting for default service account to be created ...
	I1004 02:49:44.370151   17586 default_sa.go:45] found service account: "default"
	I1004 02:49:44.370264   17586 default_sa.go:55] duration metric: took 10.13041ms for default service account to be created ...
	I1004 02:49:44.370299   17586 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 02:49:44.420487   17586 system_pods.go:86] 19 kube-system pods found
	I1004 02:49:44.420516   17586 system_pods.go:89] "coredns-7c65d6cfc9-2nft6" [010ae061-9933-4fcb-bb73-9c9607bea03e] Running
	I1004 02:49:44.420523   17586 system_pods.go:89] "coredns-7c65d6cfc9-vms49" [7ae77679-4aea-4650-b804-4b62d483ceb2] Running
	I1004 02:49:44.420530   17586 system_pods.go:89] "csi-hostpath-attacher-0" [f8cef70e-6711-4e45-986e-990453722a26] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1004 02:49:44.420536   17586 system_pods.go:89] "csi-hostpath-resizer-0" [2dc3007a-e1ee-4845-88b8-512ac894863d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1004 02:49:44.420548   17586 system_pods.go:89] "csi-hostpathplugin-fzd54" [b04e23ab-8e0e-416e-8280-7dee1a52b8a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1004 02:49:44.420554   17586 system_pods.go:89] "etcd-addons-335265" [b1eb136d-5c61-4604-93df-2b7b04a05254] Running
	I1004 02:49:44.420561   17586 system_pods.go:89] "kube-apiserver-addons-335265" [1381dd5e-1b56-4429-93be-d878c04cb93c] Running
	I1004 02:49:44.420568   17586 system_pods.go:89] "kube-controller-manager-addons-335265" [6e317c8f-8e29-4a47-940d-1ca2ae208303] Running
	I1004 02:49:44.420576   17586 system_pods.go:89] "kube-ingress-dns-minikube" [3684f708-8dec-41cd-b503-58a74f8f3df3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1004 02:49:44.420588   17586 system_pods.go:89] "kube-proxy-sl5bg" [03727f31-3609-4d9c-ba1d-da91df4ce689] Running
	I1004 02:49:44.420593   17586 system_pods.go:89] "kube-scheduler-addons-335265" [9e73330c-1229-4615-b08a-ac733c781949] Running
	I1004 02:49:44.420610   17586 system_pods.go:89] "logviewer-7c79c8bcc9-ddvsm" [eaf2b3b6-6d22-4038-8bdc-d56ceebb3cb6] Pending / Ready:ContainersNotReady (containers with unready status: [logviewer]) / ContainersReady:ContainersNotReady (containers with unready status: [logviewer])
	I1004 02:49:44.420621   17586 system_pods.go:89] "metrics-server-84c5f94fbc-gqwd8" [6e302061-d82b-4ce2-b712-1faed975bc09] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 02:49:44.420627   17586 system_pods.go:89] "nvidia-device-plugin-daemonset-hk8t5" [9fc5b35d-0561-41df-ae69-27953695f6e2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1004 02:49:44.420635   17586 system_pods.go:89] "registry-66c9cd494c-nfhcd" [bf27c03f-b1e2-412d-a96b-4bb669dd6fd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1004 02:49:44.420641   17586 system_pods.go:89] "registry-proxy-csj4d" [b56921d1-efcc-463f-9f04-40fd7fde1775] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1004 02:49:44.420650   17586 system_pods.go:89] "snapshot-controller-56fcc65765-52lpd" [57e9c889-df7e-43b8-9cec-8ce9e6caaa21] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1004 02:49:44.420664   17586 system_pods.go:89] "snapshot-controller-56fcc65765-zkf5w" [68acf020-754d-4bb3-8793-bfd1aa2974dc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1004 02:49:44.420673   17586 system_pods.go:89] "storage-provisioner" [4f2eee80-691d-47ad-98f8-c06185ac9dec] Running
	I1004 02:49:44.420683   17586 system_pods.go:126] duration metric: took 50.372052ms to wait for k8s-apps to be running ...
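For readers following the pod enumeration above, the same kube-system readiness picture can be checked by hand against the test profile; a minimal sketch, assuming the kubeconfig context addons-335265 created by this run is still reachable:

    # list kube-system pods with their phase and ready counts, as enumerated in the log above
    kubectl --context addons-335265 -n kube-system get pods -o wide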
	I1004 02:49:44.420695   17586 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 02:49:44.420742   17586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
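The kubelet check above is executed inside the VM over SSH; a hedged equivalent from the host, using the standard unit name rather than the exact invocation quoted in the log, and assuming the minikube profile addons-335265 from this run:

    # exits 0 (prints nothing with --quiet) when the kubelet unit is active
    minikube -p addons-335265 ssh -- sudo systemctl is-active --quiet kubelet && echo kubelet running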
	I1004 02:49:44.481983   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:44.482039   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
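The kapi.go polling above keys off label selectors; a sketch of the equivalent manual checks (selector names taken verbatim from the log, context name from this run):

    # registry and ingress-nginx addon pods, matched by the same selectors kapi.go waits on
    kubectl --context addons-335265 get pods -A -l kubernetes.io/minikube-addons=registry
    kubectl --context addons-335265 get pods -A -l app.kubernetes.io/name=ingress-nginx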
	I1004 02:49:44.487276   17586 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1004 02:49:44.487302   17586 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1004 02:49:44.594177   17586 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1004 02:49:44.594202   17586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1004 02:49:44.682078   17586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1004 02:49:44.840475   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:44.968377   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:44.968695   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:45.340040   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:45.432078   17586 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.011312805s)
	I1004 02:49:45.432117   17586 system_svc.go:56] duration metric: took 1.011419417s WaitForService to wait for kubelet
	I1004 02:49:45.432139   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.273032903s)
	I1004 02:49:45.432133   17586 kubeadm.go:582] duration metric: took 12.12020663s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 02:49:45.432159   17586 node_conditions.go:102] verifying NodePressure condition ...
	I1004 02:49:45.432190   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:45.432209   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:45.432463   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:45.432540   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:45.432559   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:45.432572   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:45.432578   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:45.432756   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:45.432796   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:45.432821   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:45.435775   17586 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 02:49:45.435814   17586 node_conditions.go:123] node cpu capacity is 2
	I1004 02:49:45.435827   17586 node_conditions.go:105] duration metric: took 3.661104ms to run NodePressure ...
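The NodePressure verification above reads the node's capacity and conditions; a minimal sketch of the same query (node name addons-335265 as reported in the log, output fields illustrative):

    # cpu / ephemeral-storage capacity and pressure conditions for the single node
    kubectl --context addons-335265 get node addons-335265 -o jsonpath='{.status.capacity}'
    kubectl --context addons-335265 describe node addons-335265 | grep -A6 'Conditions:'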
	I1004 02:49:45.435840   17586 start.go:241] waiting for startup goroutines ...
	I1004 02:49:45.467929   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:45.469005   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:45.843625   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:45.993157   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:45.995325   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:46.235243   17586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.553118798s)
	I1004 02:49:46.235311   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:46.235327   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:46.235624   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:46.235641   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:46.235661   17586 main.go:141] libmachine: (addons-335265) DBG | Closing plugin on server side
	I1004 02:49:46.235697   17586 main.go:141] libmachine: Making call to close driver server
	I1004 02:49:46.235724   17586 main.go:141] libmachine: (addons-335265) Calling .Close
	I1004 02:49:46.235930   17586 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:49:46.235945   17586 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:49:46.237482   17586 addons.go:475] Verifying addon gcp-auth=true in "addons-335265"
	I1004 02:49:46.239425   17586 out.go:177] * Verifying gcp-auth addon...
	I1004 02:49:46.241630   17586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1004 02:49:46.266963   17586 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1004 02:49:46.266983   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
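The gcp-auth verification loop that continues below polls the pod labelled kubernetes.io/minikube-addons=gcp-auth in the gcp-auth namespace; a hedged one-shot equivalent (the timeout value is illustrative, not taken from the log):

    # block until the gcp-auth webhook pod reports Ready, or fail after the timeout
    kubectl --context addons-335265 -n gcp-auth wait pod -l kubernetes.io/minikube-addons=gcp-auth --for=condition=Ready --timeout=180s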
	I1004 02:49:46.350894   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:46.470052   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:46.470438   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:46.745485   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:46.841096   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:46.969566   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:46.970253   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:47.245437   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:47.347536   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:47.469779   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:47.470410   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:47.745449   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:47.840515   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:47.969775   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:47.971044   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:48.245171   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:48.340181   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:48.471580   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:48.471703   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:48.745266   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:48.843475   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:48.969449   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:48.969460   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:49.245802   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:49.341658   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:49.473759   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:49.474028   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:49.745751   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:49.841068   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:49.968769   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:49.969154   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:50.246074   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:50.339806   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:50.467775   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:50.469022   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:50.746574   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:50.840723   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:50.968464   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:50.968844   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:51.245619   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:51.343367   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:51.469611   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:51.469900   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:51.745723   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:51.840565   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:51.968983   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:51.969499   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:52.245248   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:52.340454   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:52.469211   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:52.469569   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:52.746021   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:52.841590   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:52.968317   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:52.968621   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:53.245560   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:53.340508   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:53.468580   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:53.473365   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:53.745845   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:53.841349   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:53.968856   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:53.969095   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:54.247577   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:54.341801   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:54.469627   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:54.471510   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:54.746671   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:54.841323   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:54.969621   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:54.969924   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:55.246190   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:55.341193   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:55.468975   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:55.469299   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:55.745364   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:55.840919   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:55.969021   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:55.969653   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:56.245144   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:56.340908   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:56.469542   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:56.469693   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:56.746668   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:56.840682   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:56.969422   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:56.970860   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:57.245749   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:57.340649   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:57.469240   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:57.469611   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:57.745759   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:57.841098   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:57.969593   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:57.970016   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:58.245128   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:58.341030   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:58.469209   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:58.469410   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:58.773786   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:58.874775   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:58.969215   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:58.969520   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:59.244986   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:59.339894   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:59.468115   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:59.470160   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:59.745355   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:49:59.841059   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:59.968810   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:59.969102   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:00.246067   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:00.340802   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:00.468135   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:00.468380   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:00.745614   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:00.841688   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:00.969241   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:00.969728   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:01.245662   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:01.340316   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:01.468953   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:01.469229   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:01.744731   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:01.840737   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:01.967705   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:01.968480   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:02.245302   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:02.340973   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:02.469424   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:02.469832   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:02.752146   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:02.841602   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:02.968518   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:02.968602   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:03.246014   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:03.339758   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:03.468625   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:03.468701   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:03.746128   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:03.840952   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:03.969137   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:03.969356   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:04.245669   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:04.340505   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:04.469434   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:04.470367   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:04.745271   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:04.844414   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:04.968983   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:04.969858   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:05.245975   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:05.341707   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:05.469509   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:05.471502   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:05.744938   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:05.841778   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:05.969657   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:05.969788   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:06.246060   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:06.342060   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:06.469180   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:06.469183   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:06.745776   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:06.840758   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:06.968790   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:06.969009   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:07.245528   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:07.340020   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:07.468969   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:07.469044   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:07.744940   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:07.841369   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:07.969128   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:07.969181   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:08.245230   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:08.340571   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:08.469010   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:08.469908   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:08.745616   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:08.840736   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:08.968971   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:08.969575   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:09.245326   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:09.359136   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:09.467813   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:09.468992   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:09.746293   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:09.840450   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:09.969665   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:09.970228   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:10.244757   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:10.341541   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:10.469322   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:10.469662   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:10.746219   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:10.842293   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:10.969724   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:10.970388   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:11.244701   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:11.340876   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:11.469396   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:11.469799   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:11.745573   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:11.841776   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:11.968419   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:11.968720   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:12.246376   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:12.342276   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:12.469324   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:12.469727   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:12.745846   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:12.841395   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:12.975823   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:12.975989   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:13.247657   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:13.340541   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:13.468808   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:13.468936   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:13.745354   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:13.841186   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:13.968444   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:13.969007   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:14.246708   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:14.341716   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:14.468737   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:14.469321   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:14.745375   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:14.969132   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:14.970969   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:14.974481   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:15.245131   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:15.339965   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:15.468131   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:15.469116   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:15.745906   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:15.847336   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:15.968989   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:15.969125   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:16.246214   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:16.342784   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:16.468859   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:16.469534   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:16.744824   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:16.841964   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:16.968490   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:16.969554   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:17.245778   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:17.341170   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:17.468737   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:17.469234   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:17.745719   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:17.840537   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:17.968710   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:17.970973   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:18.245827   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:18.340600   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:18.468927   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:18.469056   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:18.745894   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:18.840993   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:18.968662   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:18.969588   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:19.245724   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:19.347561   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:19.469658   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:19.470830   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:19.746015   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:19.841746   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:19.969265   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:19.969911   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:20.246107   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:20.340671   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:20.469023   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:20.469306   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:20.745276   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:20.840320   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:20.968182   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:20.968618   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:21.295214   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:21.342350   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:21.474232   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:21.474704   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:21.746019   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:21.840073   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:21.968686   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:21.968883   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:22.245462   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:22.340560   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:22.469308   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:22.469628   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:22.745548   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:22.840729   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:22.968686   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:22.970021   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:23.244972   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:23.342661   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:23.468579   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:23.469885   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:23.745492   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:23.842311   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:23.968652   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:23.969795   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:24.245950   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:24.341738   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:24.468766   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:24.469203   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:24.746105   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:24.840645   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:24.968939   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:24.969048   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:25.245098   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:25.341531   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:25.468997   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:25.469022   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:25.745274   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:25.840707   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:25.968510   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:25.968829   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:26.245912   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:26.341014   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:26.468615   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:26.469002   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:26.746277   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:26.840693   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:26.971379   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:26.971812   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:27.255084   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:27.343350   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:27.469658   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:27.470073   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:27.746467   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:27.841488   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:27.968366   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:27.969354   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:28.245274   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:28.340766   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:28.468478   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:28.468863   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:28.745715   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:28.842149   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:28.969001   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:28.969558   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:29.246174   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:29.340701   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:29.470362   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:29.470409   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:29.866701   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:29.867799   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:29.968679   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:29.969428   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:30.245747   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:30.341416   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:30.468295   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:30.468513   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:30.745295   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:30.840316   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:30.968994   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:30.969291   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:31.260366   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:31.363925   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:31.601991   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:31.602101   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:31.746832   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:31.843222   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:31.969229   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:31.969897   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:32.246611   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:32.340701   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:32.469558   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:32.470623   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:32.748816   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:32.840852   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:32.967969   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:32.968308   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:33.247615   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:33.341740   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:33.468781   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:33.469050   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:33.745474   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:33.841085   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:33.968170   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:33.968693   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:34.246264   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:34.340367   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:34.468299   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:34.468747   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:34.746055   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:34.842944   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:34.968409   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:34.969114   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:35.245369   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:35.340753   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:35.471264   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:35.471350   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:35.745222   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:35.840472   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:35.968223   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:35.968365   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:36.246074   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:36.340086   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:36.468055   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:36.468198   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:36.744977   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:36.839883   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:36.968208   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:36.968525   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:37.245512   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:37.340720   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:37.472391   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:37.472806   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:37.747473   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:37.840859   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:37.968649   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:37.969217   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:38.244944   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:38.340831   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:38.469874   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:38.470270   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:38.747502   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:38.848515   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:38.968242   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:50:38.969019   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:39.246532   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:39.340418   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:39.468725   17586 kapi.go:107] duration metric: took 56.504858137s to wait for kubernetes.io/minikube-addons=registry ...
	I1004 02:50:39.469188   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:39.745137   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:39.840510   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:39.969117   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:40.245600   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:40.340848   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:40.468785   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:40.745843   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:40.841338   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:40.969196   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:41.245489   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:41.340451   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:41.468737   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:41.745759   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:41.841399   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:41.968472   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:42.245329   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:42.440188   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:42.469128   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:42.745847   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:42.840815   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:42.968614   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:43.245330   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:43.339936   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:43.470229   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:43.748229   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:43.847409   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:43.969405   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:44.245823   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:44.347707   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:44.469145   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:44.745765   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:44.840690   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:44.968922   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:45.245848   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:45.341333   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:45.474248   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:45.746235   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:45.840861   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:45.968763   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:46.247265   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:46.343845   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:46.467461   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:46.745527   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:46.840750   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:46.969455   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:47.245470   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:47.340916   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:47.484243   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:47.746347   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:47.848367   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:47.969120   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:48.245579   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:48.340848   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:48.468256   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:48.747428   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:48.840805   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:48.971165   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:49.245816   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:49.348301   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:49.476521   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:49.745694   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:49.841599   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:49.968223   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:50.246595   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:50.340959   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:50.468104   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:50.746347   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:50.848063   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:50.969292   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:51.247208   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:51.340371   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:51.469108   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:51.746057   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:51.849793   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:51.969072   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:52.244831   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:52.341056   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:52.469322   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:52.745196   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:52.840563   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:53.111192   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:53.244936   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:53.343976   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:53.471545   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:53.746426   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:53.843193   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:53.970397   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:54.247890   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:54.341959   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:54.474960   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:54.746933   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:54.841441   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:54.967989   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:55.247582   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:55.340153   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:55.468318   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:55.746499   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:55.841750   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:55.973236   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:56.249197   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:56.340171   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:56.876849   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:56.877209   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:56.879041   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:56.973709   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:57.245308   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:57.340209   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:57.468048   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:57.746390   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:57.840908   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:57.967644   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:58.245592   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:58.341448   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:58.468067   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:58.751205   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:58.846188   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:58.970831   17586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:59.246278   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:59.340430   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:50:59.857166   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:59.859292   17586 kapi.go:107] duration metric: took 1m16.895419035s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1004 02:50:59.861007   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:00.246687   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:00.341082   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:00.745779   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:00.841570   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:01.247221   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:01.340026   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:01.746716   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:01.848403   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:02.252681   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:02.362907   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:02.754433   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:02.842090   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:03.245984   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:03.341521   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:03.746257   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:03.840435   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:04.245794   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:04.341543   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:04.746584   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:04.847725   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:05.246486   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:05.341369   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:05.745443   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:05.840590   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:06.245764   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:06.341452   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:06.746300   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:06.840173   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:07.245974   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:07.342299   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:51:07.746320   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:07.840606   17586 kapi.go:107] duration metric: took 1m23.504844495s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1004 02:51:08.245959   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:08.745269   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:09.246363   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:09.746099   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:10.246354   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:10.746224   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:11.246454   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:11.744947   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:12.246099   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:12.746126   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:13.245694   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:13.746043   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:14.246221   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:14.746127   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:15.246082   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:15.746023   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:16.245963   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:16.746251   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:17.245749   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:17.746499   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:18.245972   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:18.746574   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:19.245494   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:19.745988   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:20.245924   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:20.745774   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:21.245810   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:21.745368   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:22.246344   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:22.746450   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:23.246182   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:23.746471   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:24.246589   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:24.745434   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:25.247295   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:25.746293   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:26.246630   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:26.745524   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:27.245748   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:27.746316   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:28.246829   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:28.745571   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:29.246166   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:29.746103   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:30.246477   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:30.746349   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:31.246973   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:31.745800   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:32.246183   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:32.746732   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:33.245535   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:33.745112   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:34.246490   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:34.745218   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:35.245995   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:35.745573   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:36.245361   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:36.746021   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:37.245923   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:37.745458   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:38.245995   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:38.746059   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:39.246387   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:39.745838   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:40.245935   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:40.745542   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:41.246971   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:41.745681   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:42.245593   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:42.746860   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:43.245745   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:43.746238   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:44.246420   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:44.746678   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:45.245206   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:45.746024   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:46.245747   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:46.745220   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:47.246406   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:47.746778   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:48.245804   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:48.748224   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:49.246411   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:49.746311   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:50.246717   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:50.745550   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:51.246298   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:51.746806   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:52.246046   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:52.745889   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:53.245684   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:53.745288   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:54.247609   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:54.745850   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:55.245776   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:55.746173   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:56.246715   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:56.745348   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:57.246410   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:57.746240   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:58.246472   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:58.745882   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:59.245926   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:59.745973   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:00.250275   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:00.746630   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:01.245231   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:01.746948   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:02.246143   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:02.746051   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:03.246149   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:03.746166   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:04.248198   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:04.745864   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:05.245404   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:05.744901   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:06.245761   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:06.745743   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:07.244993   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:07.745878   17586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:52:08.245943   17586 kapi.go:107] duration metric: took 2m22.004306963s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1004 02:52:08.248036   17586 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-335265 cluster.
	I1004 02:52:08.249477   17586 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1004 02:52:08.250882   17586 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1004 02:52:08.252147   17586 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, logviewer, inspektor-gadget, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1004 02:52:08.253497   17586 addons.go:510] duration metric: took 2m34.941548087s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin logviewer inspektor-gadget cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1004 02:52:08.253539   17586 start.go:246] waiting for cluster config update ...
	I1004 02:52:08.253559   17586 start.go:255] writing updated cluster config ...
	I1004 02:52:08.253804   17586 ssh_runner.go:195] Run: rm -f paused
	I1004 02:52:08.314013   17586 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 02:52:08.315923   17586 out.go:177] * Done! kubectl is now configured to use "addons-335265" cluster and "default" namespace by default
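The three out.go messages above summarize how the gcp-auth addon behaves once enabled: every newly created pod in the addons-335265 cluster gets the GCP credentials mounted automatically, a pod can opt out by carrying a label with the `gcp-auth-skip-secret` key, and pods that already existed before the addon was enabled only pick the credentials up after being recreated or after rerunning addons enable with --refresh. As a minimal sketch only (the pod name, image, and label value below are illustrative assumptions; the log itself only specifies the label key), an opted-out pod would look roughly like this:

    # sketch of a pod that opts out of gcp-auth credential injection
    # (hypothetical manifest, not part of this test run)
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                 # placeholder name
      labels:
        gcp-auth-skip-secret: "true"     # key is what the addon checks; value assumed here
    spec:
      containers:
      - name: app
        image: busybox                   # placeholder image
        command: ["sleep", "3600"]

Such a manifest would be applied with the usual kubectl --context addons-335265 apply -f against the saved file, and minikube addons enable gcp-auth --refresh is the route the log suggests for pods that were created before the addon finished enabling.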
	
	
	==> CRI-O <==
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.629733081Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e4fdde2997b249a4b161ddcd9e43c5acafc266b3158b19715d6163ef6bd52558,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-psznb,Uid:301f4c8d-964f-4a62-b1f7-a1c5a2ede151,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728011001007872860,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-psznb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 301f4c8d-964f-4a62-b1f7-a1c5a2ede151,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T03:03:20.686405897Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6083e583ec12bda5c7c1db60a8d04dfa68bb350b58fa6b890daec0131dae61f3,Metadata:&PodSandboxMetadata{Name:nginx,Uid:d3df1714-d414-4b36-9919-09dcd9c98407,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1728010857041207872,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3df1714-d414-4b36-9919-09dcd9c98407,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T03:00:56.731474247Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6deed65f66ba17a0e483da06dc0240e649915bc317b911ecb9c3d2a227c66639,Metadata:&PodSandboxMetadata{Name:busybox,Uid:ea289386-a580-4a9e-ba94-c28adf57b2a0,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728010329221385684,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea289386-a580-4a9e-ba94-c28adf57b2a0,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T02:52:08.907568671Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0f84e6d72a9217443d
0848dc0605954da3ce2876386aab2751fbc947a1336944,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-gqwd8,Uid:6e302061-d82b-4ce2-b712-1faed975bc09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728010179146176629,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gqwd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e302061-d82b-4ce2-b712-1faed975bc09,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T02:49:38.786742054Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ddb6930f4ddcceba53c7b558396e67678e379637fda6cc0135e60b8fbeeece61,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4f2eee80-691d-47ad-98f8-c06185ac9dec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728010178848125335,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kuber
netes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2eee80-691d-47ad-98f8-c06185ac9dec,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-04T02:49:38.413624391Z,kubernetes.io/config.source: api,},RuntimeHandler:,
},&PodSandbox{Id:94c3a6c8d2150d0e628678da03ef6b06d35a62e5fcc3e8b96f25df426831092b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-2nft6,Uid:010ae061-9933-4fcb-bb73-9c9607bea03e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728010173983491051,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-2nft6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010ae061-9933-4fcb-bb73-9c9607bea03e,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T02:49:33.676057495Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6d8fd99c1ac4ba73404693ec6d04fc898e5af9b1162425c587d2928e26683aa7,Metadata:&PodSandboxMetadata{Name:kube-proxy-sl5bg,Uid:03727f31-3609-4d9c-ba1d-da91df4ce689,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728010173835805465,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kub
ernetes.pod.name: kube-proxy-sl5bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03727f31-3609-4d9c-ba1d-da91df4ce689,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T02:49:32.897981919Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:880ccf5d995f02005326a0bb9dd0b6fcc8df03d6b9cd832420b50c4543927790,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-335265,Uid:b21430d03e15a45a1ab18bb07d4ac67d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728010162566391880,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21430d03e15a45a1ab18bb07d4ac67d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.175:8443,kubernetes.io/config.hash: b21430d03e15a45a1ab18bb07d4ac67d,ku
bernetes.io/config.seen: 2024-10-04T02:49:21.887164921Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:12a414a780e4d26c78b5163ceec29c6a54b381f80df20b4309014482eb74974b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-335265,Uid:3fff4526c35266ee7fcdec7c8f648cb4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728010162564484315,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fff4526c35266ee7fcdec7c8f648cb4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3fff4526c35266ee7fcdec7c8f648cb4,kubernetes.io/config.seen: 2024-10-04T02:49:21.887169774Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:096fdc10579a15c8c2eddf3947c6b0cbefe973c7d41b1405499cf67ecefd3ce6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-335265,Uid:d3ecef9a7daca0f7be3ebc78f3ff39fb,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728010162553688899,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ecef9a7daca0f7be3ebc78f3ff39fb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d3ecef9a7daca0f7be3ebc78f3ff39fb,kubernetes.io/config.seen: 2024-10-04T02:49:21.887168634Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4dcc7d42629a67aa6421922eba8b7fac78a019c987f5ff93778b80bc44357849,Metadata:&PodSandboxMetadata{Name:etcd-addons-335265,Uid:b2e8996f305a3968d1f41a37dcaab714,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728010162545816836,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e8996f305a3968d1f41a37dcaab
714,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.175:2379,kubernetes.io/config.hash: b2e8996f305a3968d1f41a37dcaab714,kubernetes.io/config.seen: 2024-10-04T02:49:21.887170696Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=34f4f90e-b221-4914-9152-1decb5429ac6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.630358431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=924cb201-07af-42c5-881c-fc3fdf6c627e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.630431085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=924cb201-07af-42c5-881c-fc3fdf6c627e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.630647995Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b6c4dec93be26bbd058dedf47a91a9cd6ae134d1b1c44d30fd41d0836ac9925,PodSandboxId:e4fdde2997b249a4b161ddcd9e43c5acafc266b3158b19715d6163ef6bd52558,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728011003886524029,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-psznb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 301f4c8d-964f-4a62-b1f7-a1c5a2ede151,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:105abce822ec642566111ef308ea64f204cf359d5e083b76e3cbd45dfea09c1f,PodSandboxId:6deed65f66ba17a0e483da06dc0240e649915bc317b911ecb9c3d2a227c66639,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728010977911148213,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea289386-a580-4a9e-ba94-c28adf57b2a0,},Annotations:map[string]string{i
o.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f80532df2b54eeca5dea38973c5abc06eee1fc3573680405e3904fd4c58bfd,PodSandboxId:6083e583ec12bda5c7c1db60a8d04dfa68bb350b58fa6b890daec0131dae61f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728010861088743567,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3df1714-d414-4b36-9919-09dcd9c98407,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b277ba738bfb71628250b79299966e29729f7c928b6b565a54f15ec1bed59c7,PodSandboxId:0f84e6d72a9217443d0848dc0605954da3ce2876386aab2751fbc947a1336944,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728010212052406010,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gqwd8,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6e302061-d82b-4ce2-b712-1faed975bc09,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70fde3be5e7a73c8d3397f8689977a297aacafe17392798a26c4e03f5f106932,PodSandboxId:ddb6930f4ddcceba53c7b558396e67678e379637fda6cc0135e60b8fbeeece61,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728010180194531619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2eee80-691d-47ad-98f8-c06185ac9dec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b7fed1985f4421f5e6f8a30150b722e9899c6a80b3c56e4f334760b4d51a731,PodSandboxId:94c3a6c8d2150d0e628678da03ef6b06d35a62e5fcc3e8b96f25df426831092b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728010177265037302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-2nft6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010ae061-9933-4fcb-bb73-9c9607bea03e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f3cc713fb4a168b6df4ffcd7057af0478b65edb2a6bb1e98a6ccf04c070dad3,PodSandboxId:6d8fd99c1ac4ba73404693ec6d04fc898e5af9b1162425c587d2928e26683aa7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728010174523721803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sl5bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03727f31-3609-4d9c-ba1d-da91df4ce689,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc78c7278537d65c513bbae8c60fcda039c61c3bdfbf5c3850c0337758bad526,PodSandboxId:4dcc7d42629a67aa6421922eba8b7fac78a019c987f5ff93778b80bc44357849,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8
d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728010162871533429,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e8996f305a3968d1f41a37dcaab714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01dc9a32ed2252be473b6a4ae9f6df2dc8b65d8e0e1961230f1357f0cda4d714,PodSandboxId:880ccf5d995f02005326a0bb9dd0b6fcc8df03d6b9cd832420b50c4543927790,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Create
dAt:1728010162801324750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21430d03e15a45a1ab18bb07d4ac67d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad952c65d22cc95d5aeadd40b7d1fd530cf827d47d8f275a984883f69649b304,PodSandboxId:12a414a780e4d26c78b5163ceec29c6a54b381f80df20b4309014482eb74974b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728010162753
992190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fff4526c35266ee7fcdec7c8f648cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0a5d8f06322eea43987c838dfb9b3c0f18ab742d3357eb113609a9784752bf4,PodSandboxId:096fdc10579a15c8c2eddf3947c6b0cbefe973c7d41b1405499cf67ecefd3ce6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728010162706974061,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ecef9a7daca0f7be3ebc78f3ff39fb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=924cb201-07af-42c5-881c-fc3fdf6c627e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.645980356Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54cfa8cc-54db-439e-be10-75368e2c60ac name=/runtime.v1.RuntimeService/Version
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.646073391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54cfa8cc-54db-439e-be10-75368e2c60ac name=/runtime.v1.RuntimeService/Version
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.647034971Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8c425dc-7dae-49c7-9a2e-febb20619bea name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.649166582Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011112649134702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8c425dc-7dae-49c7-9a2e-febb20619bea name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.649751581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfb02ab1-7977-4fa5-a09e-41d27c73b8b5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.649827329Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfb02ab1-7977-4fa5-a09e-41d27c73b8b5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.650068477Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b6c4dec93be26bbd058dedf47a91a9cd6ae134d1b1c44d30fd41d0836ac9925,PodSandboxId:e4fdde2997b249a4b161ddcd9e43c5acafc266b3158b19715d6163ef6bd52558,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728011003886524029,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-psznb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 301f4c8d-964f-4a62-b1f7-a1c5a2ede151,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:105abce822ec642566111ef308ea64f204cf359d5e083b76e3cbd45dfea09c1f,PodSandboxId:6deed65f66ba17a0e483da06dc0240e649915bc317b911ecb9c3d2a227c66639,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728010977911148213,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea289386-a580-4a9e-ba94-c28adf57b2a0,},Annotations:map[string]string{i
o.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f80532df2b54eeca5dea38973c5abc06eee1fc3573680405e3904fd4c58bfd,PodSandboxId:6083e583ec12bda5c7c1db60a8d04dfa68bb350b58fa6b890daec0131dae61f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728010861088743567,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3df1714-d414-4b36-9919-09dcd9c98407,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b277ba738bfb71628250b79299966e29729f7c928b6b565a54f15ec1bed59c7,PodSandboxId:0f84e6d72a9217443d0848dc0605954da3ce2876386aab2751fbc947a1336944,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728010212052406010,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gqwd8,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6e302061-d82b-4ce2-b712-1faed975bc09,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70fde3be5e7a73c8d3397f8689977a297aacafe17392798a26c4e03f5f106932,PodSandboxId:ddb6930f4ddcceba53c7b558396e67678e379637fda6cc0135e60b8fbeeece61,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728010180194531619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2eee80-691d-47ad-98f8-c06185ac9dec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b7fed1985f4421f5e6f8a30150b722e9899c6a80b3c56e4f334760b4d51a731,PodSandboxId:94c3a6c8d2150d0e628678da03ef6b06d35a62e5fcc3e8b96f25df426831092b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728010177265037302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-2nft6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010ae061-9933-4fcb-bb73-9c9607bea03e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f3cc713fb4a168b6df4ffcd7057af0478b65edb2a6bb1e98a6ccf04c070dad3,PodSandboxId:6d8fd99c1ac4ba73404693ec6d04fc898e5af9b1162425c587d2928e26683aa7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728010174523721803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sl5bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03727f31-3609-4d9c-ba1d-da91df4ce689,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc78c7278537d65c513bbae8c60fcda039c61c3bdfbf5c3850c0337758bad526,PodSandboxId:4dcc7d42629a67aa6421922eba8b7fac78a019c987f5ff93778b80bc44357849,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8
d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728010162871533429,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e8996f305a3968d1f41a37dcaab714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01dc9a32ed2252be473b6a4ae9f6df2dc8b65d8e0e1961230f1357f0cda4d714,PodSandboxId:880ccf5d995f02005326a0bb9dd0b6fcc8df03d6b9cd832420b50c4543927790,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Create
dAt:1728010162801324750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21430d03e15a45a1ab18bb07d4ac67d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad952c65d22cc95d5aeadd40b7d1fd530cf827d47d8f275a984883f69649b304,PodSandboxId:12a414a780e4d26c78b5163ceec29c6a54b381f80df20b4309014482eb74974b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728010162753
992190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fff4526c35266ee7fcdec7c8f648cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0a5d8f06322eea43987c838dfb9b3c0f18ab742d3357eb113609a9784752bf4,PodSandboxId:096fdc10579a15c8c2eddf3947c6b0cbefe973c7d41b1405499cf67ecefd3ce6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728010162706974061,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ecef9a7daca0f7be3ebc78f3ff39fb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfb02ab1-7977-4fa5-a09e-41d27c73b8b5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.692474871Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8106e915-62a7-45c2-8fe5-4e28a2db6012 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.692568245Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8106e915-62a7-45c2-8fe5-4e28a2db6012 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.693734977Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67b3e2c4-1151-48fa-8f47-ed369bb0ecb9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.694976345Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011112694945536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67b3e2c4-1151-48fa-8f47-ed369bb0ecb9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.695584432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=176bfe1e-3735-49e0-861b-ce17b68ee2a7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.695654903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=176bfe1e-3735-49e0-861b-ce17b68ee2a7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.695899154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b6c4dec93be26bbd058dedf47a91a9cd6ae134d1b1c44d30fd41d0836ac9925,PodSandboxId:e4fdde2997b249a4b161ddcd9e43c5acafc266b3158b19715d6163ef6bd52558,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728011003886524029,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-psznb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 301f4c8d-964f-4a62-b1f7-a1c5a2ede151,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:105abce822ec642566111ef308ea64f204cf359d5e083b76e3cbd45dfea09c1f,PodSandboxId:6deed65f66ba17a0e483da06dc0240e649915bc317b911ecb9c3d2a227c66639,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728010977911148213,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea289386-a580-4a9e-ba94-c28adf57b2a0,},Annotations:map[string]string{i
o.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f80532df2b54eeca5dea38973c5abc06eee1fc3573680405e3904fd4c58bfd,PodSandboxId:6083e583ec12bda5c7c1db60a8d04dfa68bb350b58fa6b890daec0131dae61f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728010861088743567,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3df1714-d414-4b36-9919-09dcd9c98407,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b277ba738bfb71628250b79299966e29729f7c928b6b565a54f15ec1bed59c7,PodSandboxId:0f84e6d72a9217443d0848dc0605954da3ce2876386aab2751fbc947a1336944,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728010212052406010,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gqwd8,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6e302061-d82b-4ce2-b712-1faed975bc09,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70fde3be5e7a73c8d3397f8689977a297aacafe17392798a26c4e03f5f106932,PodSandboxId:ddb6930f4ddcceba53c7b558396e67678e379637fda6cc0135e60b8fbeeece61,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728010180194531619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2eee80-691d-47ad-98f8-c06185ac9dec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b7fed1985f4421f5e6f8a30150b722e9899c6a80b3c56e4f334760b4d51a731,PodSandboxId:94c3a6c8d2150d0e628678da03ef6b06d35a62e5fcc3e8b96f25df426831092b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728010177265037302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-2nft6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010ae061-9933-4fcb-bb73-9c9607bea03e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f3cc713fb4a168b6df4ffcd7057af0478b65edb2a6bb1e98a6ccf04c070dad3,PodSandboxId:6d8fd99c1ac4ba73404693ec6d04fc898e5af9b1162425c587d2928e26683aa7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728010174523721803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sl5bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03727f31-3609-4d9c-ba1d-da91df4ce689,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc78c7278537d65c513bbae8c60fcda039c61c3bdfbf5c3850c0337758bad526,PodSandboxId:4dcc7d42629a67aa6421922eba8b7fac78a019c987f5ff93778b80bc44357849,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8
d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728010162871533429,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e8996f305a3968d1f41a37dcaab714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01dc9a32ed2252be473b6a4ae9f6df2dc8b65d8e0e1961230f1357f0cda4d714,PodSandboxId:880ccf5d995f02005326a0bb9dd0b6fcc8df03d6b9cd832420b50c4543927790,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Create
dAt:1728010162801324750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21430d03e15a45a1ab18bb07d4ac67d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad952c65d22cc95d5aeadd40b7d1fd530cf827d47d8f275a984883f69649b304,PodSandboxId:12a414a780e4d26c78b5163ceec29c6a54b381f80df20b4309014482eb74974b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728010162753
992190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fff4526c35266ee7fcdec7c8f648cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0a5d8f06322eea43987c838dfb9b3c0f18ab742d3357eb113609a9784752bf4,PodSandboxId:096fdc10579a15c8c2eddf3947c6b0cbefe973c7d41b1405499cf67ecefd3ce6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728010162706974061,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ecef9a7daca0f7be3ebc78f3ff39fb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=176bfe1e-3735-49e0-861b-ce17b68ee2a7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.733926917Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e2ca99c-3f00-4650-807c-0caa077ac291 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.734001570Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e2ca99c-3f00-4650-807c-0caa077ac291 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.735159624Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d52dfc3-8029-4d27-9de5-6cca50952be0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.736617084Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011112736586113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d52dfc3-8029-4d27-9de5-6cca50952be0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.737865548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ad9b870-755e-4285-b48c-f2a652023d26 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.737931772Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ad9b870-755e-4285-b48c-f2a652023d26 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:05:12 addons-335265 crio[659]: time="2024-10-04 03:05:12.738174969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b6c4dec93be26bbd058dedf47a91a9cd6ae134d1b1c44d30fd41d0836ac9925,PodSandboxId:e4fdde2997b249a4b161ddcd9e43c5acafc266b3158b19715d6163ef6bd52558,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728011003886524029,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-psznb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 301f4c8d-964f-4a62-b1f7-a1c5a2ede151,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:105abce822ec642566111ef308ea64f204cf359d5e083b76e3cbd45dfea09c1f,PodSandboxId:6deed65f66ba17a0e483da06dc0240e649915bc317b911ecb9c3d2a227c66639,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728010977911148213,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea289386-a580-4a9e-ba94-c28adf57b2a0,},Annotations:map[string]string{i
o.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f80532df2b54eeca5dea38973c5abc06eee1fc3573680405e3904fd4c58bfd,PodSandboxId:6083e583ec12bda5c7c1db60a8d04dfa68bb350b58fa6b890daec0131dae61f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728010861088743567,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3df1714-d414-4b36-9919-09dcd9c98407,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b277ba738bfb71628250b79299966e29729f7c928b6b565a54f15ec1bed59c7,PodSandboxId:0f84e6d72a9217443d0848dc0605954da3ce2876386aab2751fbc947a1336944,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728010212052406010,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gqwd8,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6e302061-d82b-4ce2-b712-1faed975bc09,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70fde3be5e7a73c8d3397f8689977a297aacafe17392798a26c4e03f5f106932,PodSandboxId:ddb6930f4ddcceba53c7b558396e67678e379637fda6cc0135e60b8fbeeece61,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728010180194531619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f2eee80-691d-47ad-98f8-c06185ac9dec,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b7fed1985f4421f5e6f8a30150b722e9899c6a80b3c56e4f334760b4d51a731,PodSandboxId:94c3a6c8d2150d0e628678da03ef6b06d35a62e5fcc3e8b96f25df426831092b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728010177265037302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-2nft6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 010ae061-9933-4fcb-bb73-9c9607bea03e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f3cc713fb4a168b6df4ffcd7057af0478b65edb2a6bb1e98a6ccf04c070dad3,PodSandboxId:6d8fd99c1ac4ba73404693ec6d04fc898e5af9b1162425c587d2928e26683aa7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728010174523721803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sl5bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03727f31-3609-4d9c-ba1d-da91df4ce689,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc78c7278537d65c513bbae8c60fcda039c61c3bdfbf5c3850c0337758bad526,PodSandboxId:4dcc7d42629a67aa6421922eba8b7fac78a019c987f5ff93778b80bc44357849,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8
d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728010162871533429,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e8996f305a3968d1f41a37dcaab714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01dc9a32ed2252be473b6a4ae9f6df2dc8b65d8e0e1961230f1357f0cda4d714,PodSandboxId:880ccf5d995f02005326a0bb9dd0b6fcc8df03d6b9cd832420b50c4543927790,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Create
dAt:1728010162801324750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21430d03e15a45a1ab18bb07d4ac67d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad952c65d22cc95d5aeadd40b7d1fd530cf827d47d8f275a984883f69649b304,PodSandboxId:12a414a780e4d26c78b5163ceec29c6a54b381f80df20b4309014482eb74974b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728010162753
992190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fff4526c35266ee7fcdec7c8f648cb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0a5d8f06322eea43987c838dfb9b3c0f18ab742d3357eb113609a9784752bf4,PodSandboxId:096fdc10579a15c8c2eddf3947c6b0cbefe973c7d41b1405499cf67ecefd3ce6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728010162706974061,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-335265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ecef9a7daca0f7be3ebc78f3ff39fb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ad9b870-755e-4285-b48c-f2a652023d26 name=/runtime.v1.RuntimeService/ListContainers
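The repeating Version, ImageFsInfo, and ListContainers request/response pairs above are the kubelet's routine CRI polling of CRI-O; they show the runtime answering normally rather than logging an error. A minimal way to reproduce the same container listing by hand, assuming the addons-335265 profile is still running and crictl is available inside the guest:

	out/minikube-linux-amd64 ssh -p addons-335265 "sudo crictl ps -a"
	out/minikube-linux-amd64 ssh -p addons-335265 "sudo crictl pods"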
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0b6c4dec93be2       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   About a minute ago   Running             hello-world-app           0                   e4fdde2997b24       hello-world-app-55bf9c44b4-psznb
	105abce822ec6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     2 minutes ago        Running             busybox                   0                   6deed65f66ba1       busybox
	19f80532df2b5       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         4 minutes ago        Running             nginx                     0                   6083e583ec12b       nginx
	2b277ba738bfb       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   15 minutes ago       Running             metrics-server            0                   0f84e6d72a921       metrics-server-84c5f94fbc-gqwd8
	70fde3be5e7a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago       Running             storage-provisioner       0                   ddb6930f4ddcc       storage-provisioner
	6b7fed1985f44       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        15 minutes ago       Running             coredns                   0                   94c3a6c8d2150       coredns-7c65d6cfc9-2nft6
	8f3cc713fb4a1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago       Running             kube-proxy                0                   6d8fd99c1ac4b       kube-proxy-sl5bg
	fc78c7278537d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago       Running             etcd                      0                   4dcc7d42629a6       etcd-addons-335265
	01dc9a32ed225       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago       Running             kube-apiserver            0                   880ccf5d995f0       kube-apiserver-addons-335265
	ad952c65d22cc       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago       Running             kube-scheduler            0                   12a414a780e4d       kube-scheduler-addons-335265
	a0a5d8f06322e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago       Running             kube-controller-manager   0                   096fdc10579a1       kube-controller-manager-addons-335265
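Everything listed above, including the control-plane pods and the busybox and hello-world-app workloads, is reported as Running from CRI-O's point of view. The same state can be cross-checked against the API server's view of the pods; a hedged example, assuming the addons-335265 kubeconfig context is still reachable from the host:

	kubectl --context addons-335265 get pods -A -o wide
	kubectl --context addons-335265 describe pod busybox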
	
	
	==> coredns [6b7fed1985f4421f5e6f8a30150b722e9899c6a80b3c56e4f334760b4d51a731] <==
	[INFO] 10.244.0.21:35986 - 30420 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000199743s
	[INFO] 10.244.0.21:35986 - 43651 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000113678s
	[INFO] 10.244.0.21:35986 - 7980 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00011322s
	[INFO] 10.244.0.21:37878 - 5265 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00023679s
	[INFO] 10.244.0.21:35986 - 12097 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000540597s
	[INFO] 10.244.0.21:35986 - 10445 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000133937s
	[INFO] 10.244.0.21:37878 - 38075 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000763116s
	[INFO] 10.244.0.21:35986 - 42585 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000527456s
	[INFO] 10.244.0.21:37878 - 54702 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047134s
	[INFO] 10.244.0.21:37878 - 50333 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000112025s
	[INFO] 10.244.0.21:35986 - 16603 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053131s
	[INFO] 10.244.0.21:34594 - 20067 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000133762s
	[INFO] 10.244.0.21:48356 - 30400 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000108109s
	[INFO] 10.244.0.21:48356 - 32702 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000112044s
	[INFO] 10.244.0.21:34594 - 17788 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000124251s
	[INFO] 10.244.0.21:34594 - 15215 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006539s
	[INFO] 10.244.0.21:48356 - 33518 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000093702s
	[INFO] 10.244.0.21:48356 - 34315 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000099333s
	[INFO] 10.244.0.21:48356 - 11161 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004258s
	[INFO] 10.244.0.21:34594 - 11097 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000078415s
	[INFO] 10.244.0.21:34594 - 23450 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000141359s
	[INFO] 10.244.0.21:48356 - 30448 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000144664s
	[INFO] 10.244.0.21:34594 - 24655 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077343s
	[INFO] 10.244.0.21:34594 - 32680 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000064643s
	[INFO] 10.244.0.21:48356 - 38646 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000063007s
	
	
	==> describe nodes <==
	Name:               addons-335265
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-335265
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=addons-335265
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T02_49_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-335265
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 02:49:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-335265
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:05:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:03:33 +0000   Fri, 04 Oct 2024 02:49:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:03:33 +0000   Fri, 04 Oct 2024 02:49:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:03:33 +0000   Fri, 04 Oct 2024 02:49:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:03:33 +0000   Fri, 04 Oct 2024 02:49:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.175
	  Hostname:    addons-335265
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 c63f8ecb0fea4cd4b9fc51defdeb350d
	  System UUID:                c63f8ecb-0fea-4cd4-b9fc-51defdeb350d
	  Boot ID:                    5504ac08-d55b-4b4c-bcaa-04cfbdf152d8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-psznb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 coredns-7c65d6cfc9-2nft6                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-335265                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-335265             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-335265    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-sl5bg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-335265             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-gqwd8          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         15m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node addons-335265 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node addons-335265 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node addons-335265 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m   kubelet          Node addons-335265 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node addons-335265 event: Registered Node addons-335265 in Controller
	
	
	==> dmesg <==
	[  +0.172818] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.053525] kauditd_printk_skb: 103 callbacks suppressed
	[  +5.573977] kauditd_printk_skb: 134 callbacks suppressed
	[  +7.161714] kauditd_printk_skb: 88 callbacks suppressed
	[Oct 4 02:50] kauditd_printk_skb: 4 callbacks suppressed
	[ +17.133021] kauditd_printk_skb: 24 callbacks suppressed
	[ +11.239235] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.444647] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.010088] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.520076] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 4 02:51] kauditd_printk_skb: 16 callbacks suppressed
	[Oct 4 02:52] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 4 03:00] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.933001] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.673254] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.106954] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.143709] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.350438] kauditd_printk_skb: 15 callbacks suppressed
	[ +13.829990] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 4 03:01] kauditd_printk_skb: 11 callbacks suppressed
	[ +17.597462] kauditd_printk_skb: 15 callbacks suppressed
	[ +14.762789] kauditd_printk_skb: 59 callbacks suppressed
	[  +6.665083] kauditd_printk_skb: 26 callbacks suppressed
	[Oct 4 03:02] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 4 03:03] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [fc78c7278537d65c513bbae8c60fcda039c61c3bdfbf5c3850c0337758bad526] <==
	{"level":"info","ts":"2024-10-04T02:50:59.807119Z","caller":"traceutil/trace.go:171","msg":"trace[954465718] transaction","detail":"{read_only:false; response_revision:1090; number_of_response:1; }","duration":"432.24846ms","start":"2024-10-04T02:50:59.374861Z","end":"2024-10-04T02:50:59.807110Z","steps":["trace[954465718] 'process raft request'  (duration: 431.875833ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:50:59.807184Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:50:59.374842Z","time spent":"432.312252ms","remote":"127.0.0.1:51548","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1447,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/endpointslices/ingress-nginx/ingress-nginx-controller-admission-9jckd\" mod_revision:692 > success:<request_put:<key:\"/registry/endpointslices/ingress-nginx/ingress-nginx-controller-admission-9jckd\" value_size:1360 >> failure:<request_range:<key:\"/registry/endpointslices/ingress-nginx/ingress-nginx-controller-admission-9jckd\" > >"}
	{"level":"info","ts":"2024-10-04T02:50:59.807358Z","caller":"traceutil/trace.go:171","msg":"trace[1675267081] transaction","detail":"{read_only:false; response_revision:1091; number_of_response:1; }","duration":"432.422295ms","start":"2024-10-04T02:50:59.374930Z","end":"2024-10-04T02:50:59.807352Z","steps":["trace[1675267081] 'process raft request'  (duration: 431.856105ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:50:59.807412Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:50:59.374924Z","time spent":"432.469956ms","remote":"127.0.0.1:51456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":902,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/ingress-nginx/ingress-nginx-controller-admission\" mod_revision:691 > success:<request_put:<key:\"/registry/services/endpoints/ingress-nginx/ingress-nginx-controller-admission\" value_size:817 >> failure:<request_range:<key:\"/registry/services/endpoints/ingress-nginx/ingress-nginx-controller-admission\" > >"}
	{"level":"info","ts":"2024-10-04T02:50:59.807492Z","caller":"traceutil/trace.go:171","msg":"trace[1764288079] transaction","detail":"{read_only:false; response_revision:1092; number_of_response:1; }","duration":"432.512953ms","start":"2024-10-04T02:50:59.374973Z","end":"2024-10-04T02:50:59.807486Z","steps":["trace[1764288079] 'process raft request'  (duration: 431.879101ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:50:59.807540Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:50:59.374969Z","time spent":"432.553424ms","remote":"127.0.0.1:51548","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1410,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/endpointslices/ingress-nginx/ingress-nginx-controller-qbnmr\" mod_revision:687 > success:<request_put:<key:\"/registry/endpointslices/ingress-nginx/ingress-nginx-controller-qbnmr\" value_size:1333 >> failure:<request_range:<key:\"/registry/endpointslices/ingress-nginx/ingress-nginx-controller-qbnmr\" > >"}
	{"level":"warn","ts":"2024-10-04T02:50:59.807621Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"372.894079ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T02:50:59.807658Z","caller":"traceutil/trace.go:171","msg":"trace[6222102] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1092; }","duration":"372.930474ms","start":"2024-10-04T02:50:59.434721Z","end":"2024-10-04T02:50:59.807651Z","steps":["trace[6222102] 'agreement among raft nodes before linearized reading'  (duration: 372.880696ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:50:59.807676Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T02:50:59.434669Z","time spent":"373.002459ms","remote":"127.0.0.1:51462","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-04T02:50:59.807833Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.666992ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-10-04T02:50:59.807869Z","caller":"traceutil/trace.go:171","msg":"trace[876086469] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1092; }","duration":"264.725792ms","start":"2024-10-04T02:50:59.543135Z","end":"2024-10-04T02:50:59.807861Z","steps":["trace[876086469] 'agreement among raft nodes before linearized reading'  (duration: 264.619832ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T02:51:37.591112Z","caller":"traceutil/trace.go:171","msg":"trace[1664385201] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"237.766427ms","start":"2024-10-04T02:51:37.353306Z","end":"2024-10-04T02:51:37.591072Z","steps":["trace[1664385201] 'process raft request'  (duration: 237.525802ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T02:51:37.592503Z","caller":"traceutil/trace.go:171","msg":"trace[728180221] linearizableReadLoop","detail":"{readStateIndex:1240; appliedIndex:1239; }","duration":"197.260424ms","start":"2024-10-04T02:51:37.395218Z","end":"2024-10-04T02:51:37.592479Z","steps":["trace[728180221] 'read index received'  (duration: 196.335818ms)","trace[728180221] 'applied index is now lower than readState.Index'  (duration: 924.109µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T02:51:37.592742Z","caller":"traceutil/trace.go:171","msg":"trace[296484496] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"218.828676ms","start":"2024-10-04T02:51:37.373905Z","end":"2024-10-04T02:51:37.592733Z","steps":["trace[296484496] 'process raft request'  (duration: 218.517291ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:51:37.593049Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.798234ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.175\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-10-04T02:51:37.593305Z","caller":"traceutil/trace.go:171","msg":"trace[1842154626] range","detail":"{range_begin:/registry/masterleases/192.168.39.175; range_end:; response_count:1; response_revision:1196; }","duration":"198.075614ms","start":"2024-10-04T02:51:37.395213Z","end":"2024-10-04T02:51:37.593289Z","steps":["trace[1842154626] 'agreement among raft nodes before linearized reading'  (duration: 197.736523ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T02:51:37.593477Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.59647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/gadget.kinvolk.io/traces/\" range_end:\"/registry/gadget.kinvolk.io/traces0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T02:51:37.593574Z","caller":"traceutil/trace.go:171","msg":"trace[858644879] range","detail":"{range_begin:/registry/gadget.kinvolk.io/traces/; range_end:/registry/gadget.kinvolk.io/traces0; response_count:0; response_revision:1196; }","duration":"154.705415ms","start":"2024-10-04T02:51:37.438862Z","end":"2024-10-04T02:51:37.593567Z","steps":["trace[858644879] 'agreement among raft nodes before linearized reading'  (duration: 154.584117ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T02:59:23.996296Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1484}
	{"level":"info","ts":"2024-10-04T02:59:24.029616Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1484,"took":"32.158641ms","hash":2557689659,"current-db-size-bytes":5967872,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3010560,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2024-10-04T02:59:24.029691Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2557689659,"revision":1484,"compact-revision":-1}
	{"level":"info","ts":"2024-10-04T03:02:11.292542Z","caller":"traceutil/trace.go:171","msg":"trace[2074514700] transaction","detail":"{read_only:false; response_revision:2622; number_of_response:1; }","duration":"115.006913ms","start":"2024-10-04T03:02:11.177499Z","end":"2024-10-04T03:02:11.292506Z","steps":["trace[2074514700] 'process raft request'  (duration: 114.883696ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T03:04:24.004550Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1902}
	{"level":"info","ts":"2024-10-04T03:04:24.027391Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1902,"took":"21.703857ms","hash":1755759227,"current-db-size-bytes":6090752,"current-db-size":"6.1 MB","current-db-size-in-use-bytes":5054464,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-10-04T03:04:24.027471Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1755759227,"revision":1902,"compact-revision":1484}
	
	
	==> kernel <==
	 03:05:13 up 16 min,  0 users,  load average: 0.19, 0.38, 0.42
	Linux addons-335265 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [01dc9a32ed2252be473b6a4ae9f6df2dc8b65d8e0e1961230f1357f0cda4d714] <==
	E1004 02:51:18.381364       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.57.10:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.57.10:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.57.10:443: connect: connection refused" logger="UnhandledError"
	E1004 02:51:18.383239       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.57.10:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.57.10:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.57.10:443: connect: connection refused" logger="UnhandledError"
	E1004 02:51:18.389037       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.57.10:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.57.10:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.57.10:443: connect: connection refused" logger="UnhandledError"
	I1004 02:51:18.476638       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1004 03:00:49.822738       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1004 03:00:50.448115       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1004 03:00:51.045084       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1004 03:00:52.112854       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1004 03:00:56.589319       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1004 03:00:56.771409       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.121.189"}
	I1004 03:01:21.136198       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:01:21.136311       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 03:01:21.160896       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:01:21.160959       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 03:01:21.189707       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:01:21.189766       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 03:01:21.192928       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:01:21.192981       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 03:01:21.220150       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 03:01:21.220202       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1004 03:01:22.190102       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1004 03:01:22.222069       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1004 03:01:22.282584       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1004 03:01:35.575771       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.155.54"}
	I1004 03:03:20.859169       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.163.107"}
	
	
	==> kube-controller-manager [a0a5d8f06322eea43987c838dfb9b3c0f18ab742d3357eb113609a9784752bf4] <==
	I1004 03:03:24.918425       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I1004 03:03:24.923621       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="9.495µs"
	I1004 03:03:24.928332       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W1004 03:03:25.648877       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:03:25.648995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1004 03:03:33.568920       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-335265"
	I1004 03:03:34.990014       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W1004 03:03:35.776225       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:03:35.776344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:03:43.254230       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:03:43.254475       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:03:54.420219       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:03:54.420344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:04:08.900507       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:04:08.900670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:04:17.474932       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:04:17.475110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:04:35.159417       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:04:35.159604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:04:47.011021       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:04:47.011200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:04:55.730597       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:04:55.730712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1004 03:05:09.569887       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 03:05:09.569925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [8f3cc713fb4a168b6df4ffcd7057af0478b65edb2a6bb1e98a6ccf04c070dad3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 02:49:35.459044       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 02:49:35.470353       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.175"]
	E1004 02:49:35.470433       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 02:49:35.567051       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 02:49:35.567095       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 02:49:35.567119       1 server_linux.go:169] "Using iptables Proxier"
	I1004 02:49:35.571219       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 02:49:35.571545       1 server.go:483] "Version info" version="v1.31.1"
	I1004 02:49:35.571576       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 02:49:35.578171       1 config.go:199] "Starting service config controller"
	I1004 02:49:35.578218       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 02:49:35.578314       1 config.go:105] "Starting endpoint slice config controller"
	I1004 02:49:35.578319       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 02:49:35.588871       1 config.go:328] "Starting node config controller"
	I1004 02:49:35.588940       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 02:49:35.679231       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 02:49:35.679339       1 shared_informer.go:320] Caches are synced for service config
	I1004 02:49:35.692342       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ad952c65d22cc95d5aeadd40b7d1fd530cf827d47d8f275a984883f69649b304] <==
	W1004 02:49:26.552910       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 02:49:26.553000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.660571       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 02:49:26.660658       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.698906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 02:49:26.698952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.712346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 02:49:26.712432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.794669       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 02:49:26.794842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.795032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 02:49:26.795093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.899220       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 02:49:26.899320       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1004 02:49:26.930892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 02:49:26.930954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.943962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 02:49:26.944014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.960554       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 02:49:26.961399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.976237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 02:49:26.976332       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 02:49:26.999762       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 02:49:26.999902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1004 02:49:29.202807       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 03:03:38 addons-335265 kubelet[1210]: E1004 03:03:38.897920    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011018897483105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:48 addons-335265 kubelet[1210]: E1004 03:03:48.900446    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011028899875692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:48 addons-335265 kubelet[1210]: E1004 03:03:48.900860    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011028899875692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:58 addons-335265 kubelet[1210]: E1004 03:03:58.903664    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011038903066104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:03:58 addons-335265 kubelet[1210]: E1004 03:03:58.903968    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011038903066104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:04:07 addons-335265 kubelet[1210]: I1004 03:04:07.336108    1210 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 04 03:04:08 addons-335265 kubelet[1210]: E1004 03:04:08.907019    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011048906672707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:04:08 addons-335265 kubelet[1210]: E1004 03:04:08.907066    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011048906672707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:04:18 addons-335265 kubelet[1210]: E1004 03:04:18.910149    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011058909813169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:04:18 addons-335265 kubelet[1210]: E1004 03:04:18.910201    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011058909813169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:04:28 addons-335265 kubelet[1210]: E1004 03:04:28.383237    1210 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:04:28 addons-335265 kubelet[1210]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:04:28 addons-335265 kubelet[1210]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:04:28 addons-335265 kubelet[1210]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:04:28 addons-335265 kubelet[1210]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:04:28 addons-335265 kubelet[1210]: E1004 03:04:28.913518    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011068912961547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:04:28 addons-335265 kubelet[1210]: E1004 03:04:28.913546    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011068912961547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:04:38 addons-335265 kubelet[1210]: E1004 03:04:38.919042    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011078918144767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:04:38 addons-335265 kubelet[1210]: E1004 03:04:38.919103    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011078918144767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:04:48 addons-335265 kubelet[1210]: E1004 03:04:48.922028    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011088921638389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:04:48 addons-335265 kubelet[1210]: E1004 03:04:48.922523    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011088921638389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:04:58 addons-335265 kubelet[1210]: E1004 03:04:58.925803    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011098925085657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:04:58 addons-335265 kubelet[1210]: E1004 03:04:58.925845    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011098925085657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:05:08 addons-335265 kubelet[1210]: E1004 03:05:08.928980    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011108928597310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:05:08 addons-335265 kubelet[1210]: E1004 03:05:08.929053    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728011108928597310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593700,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [70fde3be5e7a73c8d3397f8689977a297aacafe17392798a26c4e03f5f106932] <==
	I1004 02:49:40.906666       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 02:49:41.749565       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 02:49:41.749631       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 02:49:42.145909       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 02:49:42.146108       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-335265_44b73c5d-ab93-44e0-a85b-b47a1860d5db!
	I1004 02:49:42.191335       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"74e1309f-d5d8-4d08-a932-f554ffb03b94", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-335265_44b73c5d-ab93-44e0-a85b-b47a1860d5db became leader
	I1004 02:49:42.448798       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-335265_44b73c5d-ab93-44e0-a85b-b47a1860d5db!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-335265 -n addons-335265
helpers_test.go:261: (dbg) Run:  kubectl --context addons-335265 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:990: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (294.43s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.42s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-335265
addons_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-335265: exit status 82 (2m0.469498886s)

                                                
                                                
-- stdout --
	* Stopping node "addons-335265"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:173: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-335265" : exit status 82
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-335265
addons_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-335265: exit status 11 (21.665739715s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.175:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:177: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-335265" : exit status 11
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-335265
addons_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-335265: exit status 11 (6.143499405s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.175:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:181: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-335265" : exit status 11
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-335265
addons_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-335265: exit status 11 (6.144937236s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.175:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:186: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-335265" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.42s)
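
All three addon commands above fail at the same pre-flight step: before enabling or disabling anything, minikube checks whether the cluster is paused, which requires an SSH session to the node, and the dial to 192.168.39.175:22 returns "no route to host" because the VM was left half-stopped by the earlier GUEST_STOP_TIMEOUT. A minimal Go sketch of that kind of reachability probe (a hypothetical helper for illustration, not minikube's actual code):

package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable reports whether the node's SSH port accepts a TCP connection
// within the timeout. The failed addon commands above die inside an
// equivalent step ("dial tcp 192.168.39.175:22: connect: no route to host").
func sshReachable(ip string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), timeout)
	if err != nil {
		return fmt.Errorf("node %s not reachable over SSH: %w", ip, err)
	}
	return conn.Close()
}

func main() {
	// IP taken from the failure output above; the 5s timeout is an arbitrary choice.
	if err := sshReachable("192.168.39.175", 5*time.Second); err != nil {
		fmt.Println(err)
	}
}

Run against the profile's IP while the VM is in this state, the probe reproduces the same "connect: no route to host" error seen in each stderr block above.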

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 node stop m02 -v=7 --alsologtostderr
E1004 03:22:55.991222   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:23:36.952883   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-994751 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.469117892s)

                                                
                                                
-- stdout --
	* Stopping node "ha-994751-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 03:22:45.169867   34689 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:22:45.170219   34689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:22:45.170232   34689 out.go:358] Setting ErrFile to fd 2...
	I1004 03:22:45.170238   34689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:22:45.170489   34689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 03:22:45.170745   34689 mustload.go:65] Loading cluster: ha-994751
	I1004 03:22:45.171121   34689 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:22:45.171136   34689 stop.go:39] StopHost: ha-994751-m02
	I1004 03:22:45.171487   34689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:22:45.171535   34689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:22:45.186503   34689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34487
	I1004 03:22:45.186918   34689 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:22:45.187532   34689 main.go:141] libmachine: Using API Version  1
	I1004 03:22:45.187554   34689 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:22:45.187975   34689 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:22:45.190920   34689 out.go:177] * Stopping node "ha-994751-m02"  ...
	I1004 03:22:45.192322   34689 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1004 03:22:45.192373   34689 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:22:45.192620   34689 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1004 03:22:45.192648   34689 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:22:45.195754   34689 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:22:45.196227   34689 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:22:45.196259   34689 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:22:45.196444   34689 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:22:45.196610   34689 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:22:45.196755   34689 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:22:45.196897   34689 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:22:45.282173   34689 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1004 03:22:45.338174   34689 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1004 03:22:45.392475   34689 main.go:141] libmachine: Stopping "ha-994751-m02"...
	I1004 03:22:45.392498   34689 main.go:141] libmachine: (ha-994751-m02) Calling .GetState
	I1004 03:22:45.393977   34689 main.go:141] libmachine: (ha-994751-m02) Calling .Stop
	I1004 03:22:45.397642   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 0/120
	I1004 03:22:46.399307   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 1/120
	I1004 03:22:47.401435   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 2/120
	I1004 03:22:48.402572   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 3/120
	I1004 03:22:49.403944   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 4/120
	I1004 03:22:50.406043   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 5/120
	I1004 03:22:51.407512   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 6/120
	I1004 03:22:52.408676   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 7/120
	I1004 03:22:53.410499   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 8/120
	I1004 03:22:54.411966   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 9/120
	I1004 03:22:55.414348   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 10/120
	I1004 03:22:56.415541   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 11/120
	I1004 03:22:57.416926   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 12/120
	I1004 03:22:58.418146   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 13/120
	I1004 03:22:59.419386   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 14/120
	I1004 03:23:00.421133   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 15/120
	I1004 03:23:01.422330   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 16/120
	I1004 03:23:02.423641   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 17/120
	I1004 03:23:03.425135   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 18/120
	I1004 03:23:04.426547   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 19/120
	I1004 03:23:05.428426   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 20/120
	I1004 03:23:06.429716   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 21/120
	I1004 03:23:07.432135   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 22/120
	I1004 03:23:08.434492   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 23/120
	I1004 03:23:09.436127   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 24/120
	I1004 03:23:10.438081   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 25/120
	I1004 03:23:11.439439   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 26/120
	I1004 03:23:12.440921   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 27/120
	I1004 03:23:13.442588   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 28/120
	I1004 03:23:14.444040   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 29/120
	I1004 03:23:15.446101   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 30/120
	I1004 03:23:16.447316   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 31/120
	I1004 03:23:17.448728   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 32/120
	I1004 03:23:18.450181   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 33/120
	I1004 03:23:19.451771   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 34/120
	I1004 03:23:20.453760   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 35/120
	I1004 03:23:21.455184   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 36/120
	I1004 03:23:22.456667   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 37/120
	I1004 03:23:23.458765   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 38/120
	I1004 03:23:24.460413   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 39/120
	I1004 03:23:25.462294   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 40/120
	I1004 03:23:26.463932   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 41/120
	I1004 03:23:27.466370   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 42/120
	I1004 03:23:28.467817   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 43/120
	I1004 03:23:29.469341   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 44/120
	I1004 03:23:30.471267   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 45/120
	I1004 03:23:31.472618   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 46/120
	I1004 03:23:32.473944   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 47/120
	I1004 03:23:33.475638   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 48/120
	I1004 03:23:34.477091   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 49/120
	I1004 03:23:35.479292   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 50/120
	I1004 03:23:36.481165   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 51/120
	I1004 03:23:37.483376   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 52/120
	I1004 03:23:38.484857   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 53/120
	I1004 03:23:39.486476   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 54/120
	I1004 03:23:40.488333   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 55/120
	I1004 03:23:41.489649   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 56/120
	I1004 03:23:42.491072   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 57/120
	I1004 03:23:43.492469   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 58/120
	I1004 03:23:44.494246   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 59/120
	I1004 03:23:45.496560   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 60/120
	I1004 03:23:46.498711   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 61/120
	I1004 03:23:47.499974   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 62/120
	I1004 03:23:48.501694   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 63/120
	I1004 03:23:49.503052   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 64/120
	I1004 03:23:50.504760   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 65/120
	I1004 03:23:51.506505   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 66/120
	I1004 03:23:52.508124   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 67/120
	I1004 03:23:53.510475   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 68/120
	I1004 03:23:54.512009   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 69/120
	I1004 03:23:55.513585   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 70/120
	I1004 03:23:56.515212   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 71/120
	I1004 03:23:57.517090   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 72/120
	I1004 03:23:58.518522   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 73/120
	I1004 03:23:59.520274   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 74/120
	I1004 03:24:00.522455   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 75/120
	I1004 03:24:01.524151   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 76/120
	I1004 03:24:02.526444   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 77/120
	I1004 03:24:03.528038   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 78/120
	I1004 03:24:04.530540   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 79/120
	I1004 03:24:05.532794   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 80/120
	I1004 03:24:06.534430   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 81/120
	I1004 03:24:07.536173   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 82/120
	I1004 03:24:08.537580   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 83/120
	I1004 03:24:09.538971   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 84/120
	I1004 03:24:10.540852   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 85/120
	I1004 03:24:11.542433   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 86/120
	I1004 03:24:12.543851   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 87/120
	I1004 03:24:13.545231   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 88/120
	I1004 03:24:14.546747   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 89/120
	I1004 03:24:15.548230   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 90/120
	I1004 03:24:16.550323   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 91/120
	I1004 03:24:17.551805   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 92/120
	I1004 03:24:18.553212   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 93/120
	I1004 03:24:19.554340   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 94/120
	I1004 03:24:20.556569   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 95/120
	I1004 03:24:21.557831   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 96/120
	I1004 03:24:22.559317   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 97/120
	I1004 03:24:23.560748   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 98/120
	I1004 03:24:24.562078   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 99/120
	I1004 03:24:25.563544   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 100/120
	I1004 03:24:26.565670   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 101/120
	I1004 03:24:27.567620   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 102/120
	I1004 03:24:28.568915   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 103/120
	I1004 03:24:29.570542   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 104/120
	I1004 03:24:30.572380   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 105/120
	I1004 03:24:31.574299   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 106/120
	I1004 03:24:32.575446   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 107/120
	I1004 03:24:33.577035   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 108/120
	I1004 03:24:34.578370   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 109/120
	I1004 03:24:35.580456   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 110/120
	I1004 03:24:36.581631   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 111/120
	I1004 03:24:37.582957   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 112/120
	I1004 03:24:38.584473   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 113/120
	I1004 03:24:39.586354   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 114/120
	I1004 03:24:40.588597   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 115/120
	I1004 03:24:41.590032   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 116/120
	I1004 03:24:42.591427   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 117/120
	I1004 03:24:43.593352   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 118/120
	I1004 03:24:44.594967   34689 main.go:141] libmachine: (ha-994751-m02) Waiting for machine to stop 119/120
	I1004 03:24:45.595796   34689 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1004 03:24:45.595933   34689 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-994751 node stop m02 -v=7 --alsologtostderr": exit status 30
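
The stderr above shows the shape of the timeout: libmachine issues a Stop to the libvirt domain, then polls the machine state once per second for 120 attempts before giving up and returning exit status 30. A rough sketch of that bounded polling pattern (illustrative only; requestStop and getState are hypothetical stand-ins for the kvm2 driver calls):

package main

import (
	"fmt"
	"time"
)

// stopWithTimeout asks the hypervisor to stop a VM, then polls its state once
// per second, mirroring the "Waiting for machine to stop 0/120 ... 119/120"
// lines above. requestStop and getState are hypothetical stand-ins for the
// kvm2 driver calls.
func stopWithTimeout(requestStop func() error, getState func() (string, error), maxAttempts int) error {
	if err := requestStop(); err != nil {
		return err
	}
	state := "Unknown"
	for i := 0; i < maxAttempts; i++ {
		var err error
		if state, err = getState(); err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", state)
}

func main() {
	// A guest that never leaves "Running" (as in this test) exhausts all 120
	// attempts and returns the same error string seen in the stderr above.
	err := stopWithTimeout(
		func() error { return nil },
		func() (string, error) { return "Running", nil },
		120,
	)
	fmt.Println(err)
}

With a guest that never completes ACPI shutdown, the loop runs the full 120 attempts, which accounts for the ~2m0.47s wall time reported for the node stop command.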
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr
E1004 03:24:58.876327   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr: (18.855836183s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr": 
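
The four assertions above amount to counting component states in the status output: with only m02 stopped, the test still expects three control-plane nodes listed, three hosts and three kubelets Running (m01, m03, m04), and two apiservers Running (m01, m03). A small sketch of that counting, assuming the plain-text "minikube status" format with "host:", "kubelet:" and "apiserver:" lines:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// countRunning runs `minikube status` for a profile and counts how many nodes
// report each component as Running; this is roughly what the ha_test.go
// assertions above verify. The "host:"/"kubelet:"/"apiserver:" prefixes are an
// assumption about the plain-text status format, and the exit code is ignored
// because status exits non-zero when any node is down.
func countRunning(profile string) (hosts, kubelets, apiservers int) {
	out, _ := exec.Command("out/minikube-linux-amd64", "status", "-p", profile).CombinedOutput()
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(line, "host: Running"):
			hosts++
		case strings.HasPrefix(line, "kubelet: Running"):
			kubelets++
		case strings.HasPrefix(line, "apiserver: Running"):
			apiservers++
		}
	}
	return hosts, kubelets, apiservers
}

func main() {
	h, k, a := countRunning("ha-994751")
	// After stopping only m02 the expected counts are 3, 3 and 2.
	fmt.Printf("hosts=%d kubelets=%d apiservers=%d\n", h, k, a)
}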
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-994751 -n ha-994751
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-994751 logs -n 25: (1.530785933s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1363640037/001/cp-test_ha-994751-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751:/home/docker/cp-test_ha-994751-m03_ha-994751.txt                       |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751 sudo cat                                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m03_ha-994751.txt                                 |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m02:/home/docker/cp-test_ha-994751-m03_ha-994751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m02 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m03_ha-994751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04:/home/docker/cp-test_ha-994751-m03_ha-994751-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m04 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m03_ha-994751-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp testdata/cp-test.txt                                                | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1363640037/001/cp-test_ha-994751-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751:/home/docker/cp-test_ha-994751-m04_ha-994751.txt                       |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751 sudo cat                                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751.txt                                 |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m02:/home/docker/cp-test_ha-994751-m04_ha-994751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m02 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03:/home/docker/cp-test_ha-994751-m04_ha-994751-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m03 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-994751 node stop m02 -v=7                                                     | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 03:18:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 03:18:05.722757   30630 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:18:05.722861   30630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:18:05.722866   30630 out.go:358] Setting ErrFile to fd 2...
	I1004 03:18:05.722871   30630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:18:05.723051   30630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 03:18:05.723672   30630 out.go:352] Setting JSON to false
	I1004 03:18:05.724646   30630 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3631,"bootTime":1728008255,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 03:18:05.724743   30630 start.go:139] virtualization: kvm guest
	I1004 03:18:05.726903   30630 out.go:177] * [ha-994751] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 03:18:05.728435   30630 notify.go:220] Checking for updates...
	I1004 03:18:05.728459   30630 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:18:05.730163   30630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:18:05.731580   30630 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:18:05.733048   30630 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:05.734449   30630 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 03:18:05.735914   30630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:18:05.737675   30630 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:18:05.774405   30630 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 03:18:05.775959   30630 start.go:297] selected driver: kvm2
	I1004 03:18:05.775980   30630 start.go:901] validating driver "kvm2" against <nil>
	I1004 03:18:05.775993   30630 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:18:05.776759   30630 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:18:05.776855   30630 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 03:18:05.791915   30630 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 03:18:05.791974   30630 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 03:18:05.792218   30630 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:18:05.792245   30630 cni.go:84] Creating CNI manager for ""
	I1004 03:18:05.792281   30630 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1004 03:18:05.792289   30630 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1004 03:18:05.792342   30630 start.go:340] cluster config:
	{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1004 03:18:05.792429   30630 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:18:05.794321   30630 out.go:177] * Starting "ha-994751" primary control-plane node in "ha-994751" cluster
	I1004 03:18:05.795797   30630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:18:05.795855   30630 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 03:18:05.795867   30630 cache.go:56] Caching tarball of preloaded images
	I1004 03:18:05.795948   30630 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 03:18:05.795958   30630 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:18:05.796250   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:18:05.796278   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json: {Name:mk8f786fa93ab6935652e46df2caeb1892ffd1fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:05.796426   30630 start.go:360] acquireMachinesLock for ha-994751: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 03:18:05.796455   30630 start.go:364] duration metric: took 15.921µs to acquireMachinesLock for "ha-994751"
	I1004 03:18:05.796470   30630 start.go:93] Provisioning new machine with config: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:18:05.796525   30630 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 03:18:05.798287   30630 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 03:18:05.798440   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:05.798475   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:05.812686   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34047
	I1004 03:18:05.813143   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:05.813678   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:05.813709   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:05.814066   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:05.814254   30630 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:18:05.814407   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:05.814549   30630 start.go:159] libmachine.API.Create for "ha-994751" (driver="kvm2")
	I1004 03:18:05.814572   30630 client.go:168] LocalClient.Create starting
	I1004 03:18:05.814612   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 03:18:05.814645   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:18:05.814661   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:18:05.814721   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 03:18:05.814738   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:18:05.814750   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:18:05.814764   30630 main.go:141] libmachine: Running pre-create checks...
	I1004 03:18:05.814779   30630 main.go:141] libmachine: (ha-994751) Calling .PreCreateCheck
	I1004 03:18:05.815056   30630 main.go:141] libmachine: (ha-994751) Calling .GetConfigRaw
	I1004 03:18:05.815402   30630 main.go:141] libmachine: Creating machine...
	I1004 03:18:05.815413   30630 main.go:141] libmachine: (ha-994751) Calling .Create
	I1004 03:18:05.815566   30630 main.go:141] libmachine: (ha-994751) Creating KVM machine...
	I1004 03:18:05.816861   30630 main.go:141] libmachine: (ha-994751) DBG | found existing default KVM network
	I1004 03:18:05.817536   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:05.817406   30653 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I1004 03:18:05.817563   30630 main.go:141] libmachine: (ha-994751) DBG | created network xml: 
	I1004 03:18:05.817586   30630 main.go:141] libmachine: (ha-994751) DBG | <network>
	I1004 03:18:05.817592   30630 main.go:141] libmachine: (ha-994751) DBG |   <name>mk-ha-994751</name>
	I1004 03:18:05.817597   30630 main.go:141] libmachine: (ha-994751) DBG |   <dns enable='no'/>
	I1004 03:18:05.817602   30630 main.go:141] libmachine: (ha-994751) DBG |   
	I1004 03:18:05.817610   30630 main.go:141] libmachine: (ha-994751) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1004 03:18:05.817615   30630 main.go:141] libmachine: (ha-994751) DBG |     <dhcp>
	I1004 03:18:05.817621   30630 main.go:141] libmachine: (ha-994751) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1004 03:18:05.817629   30630 main.go:141] libmachine: (ha-994751) DBG |     </dhcp>
	I1004 03:18:05.817644   30630 main.go:141] libmachine: (ha-994751) DBG |   </ip>
	I1004 03:18:05.817652   30630 main.go:141] libmachine: (ha-994751) DBG |   
	I1004 03:18:05.817659   30630 main.go:141] libmachine: (ha-994751) DBG | </network>
	I1004 03:18:05.817668   30630 main.go:141] libmachine: (ha-994751) DBG | 
	I1004 03:18:05.823178   30630 main.go:141] libmachine: (ha-994751) DBG | trying to create private KVM network mk-ha-994751 192.168.39.0/24...
	I1004 03:18:05.886885   30630 main.go:141] libmachine: (ha-994751) DBG | private KVM network mk-ha-994751 192.168.39.0/24 created
	I1004 03:18:05.886925   30630 main.go:141] libmachine: (ha-994751) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751 ...
	I1004 03:18:05.886940   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:05.886875   30653 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:05.886958   30630 main.go:141] libmachine: (ha-994751) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 03:18:05.887024   30630 main.go:141] libmachine: (ha-994751) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 03:18:06.142449   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:06.142299   30653 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa...
	I1004 03:18:06.210635   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:06.210526   30653 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/ha-994751.rawdisk...
	I1004 03:18:06.210664   30630 main.go:141] libmachine: (ha-994751) DBG | Writing magic tar header
	I1004 03:18:06.210677   30630 main.go:141] libmachine: (ha-994751) DBG | Writing SSH key tar header
	I1004 03:18:06.210688   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:06.210638   30653 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751 ...
	I1004 03:18:06.210755   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751
	I1004 03:18:06.210796   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751 (perms=drwx------)
	I1004 03:18:06.210813   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 03:18:06.210829   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:06.210837   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 03:18:06.210844   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 03:18:06.210850   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins
	I1004 03:18:06.210857   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 03:18:06.210924   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 03:18:06.210944   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 03:18:06.210949   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home
	I1004 03:18:06.210957   30630 main.go:141] libmachine: (ha-994751) DBG | Skipping /home - not owner
	I1004 03:18:06.210976   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 03:18:06.210990   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 03:18:06.210999   30630 main.go:141] libmachine: (ha-994751) Creating domain...
	I1004 03:18:06.212079   30630 main.go:141] libmachine: (ha-994751) define libvirt domain using xml: 
	I1004 03:18:06.212103   30630 main.go:141] libmachine: (ha-994751) <domain type='kvm'>
	I1004 03:18:06.212112   30630 main.go:141] libmachine: (ha-994751)   <name>ha-994751</name>
	I1004 03:18:06.212118   30630 main.go:141] libmachine: (ha-994751)   <memory unit='MiB'>2200</memory>
	I1004 03:18:06.212126   30630 main.go:141] libmachine: (ha-994751)   <vcpu>2</vcpu>
	I1004 03:18:06.212132   30630 main.go:141] libmachine: (ha-994751)   <features>
	I1004 03:18:06.212140   30630 main.go:141] libmachine: (ha-994751)     <acpi/>
	I1004 03:18:06.212152   30630 main.go:141] libmachine: (ha-994751)     <apic/>
	I1004 03:18:06.212164   30630 main.go:141] libmachine: (ha-994751)     <pae/>
	I1004 03:18:06.212177   30630 main.go:141] libmachine: (ha-994751)     
	I1004 03:18:06.212187   30630 main.go:141] libmachine: (ha-994751)   </features>
	I1004 03:18:06.212192   30630 main.go:141] libmachine: (ha-994751)   <cpu mode='host-passthrough'>
	I1004 03:18:06.212196   30630 main.go:141] libmachine: (ha-994751)   
	I1004 03:18:06.212200   30630 main.go:141] libmachine: (ha-994751)   </cpu>
	I1004 03:18:06.212204   30630 main.go:141] libmachine: (ha-994751)   <os>
	I1004 03:18:06.212210   30630 main.go:141] libmachine: (ha-994751)     <type>hvm</type>
	I1004 03:18:06.212215   30630 main.go:141] libmachine: (ha-994751)     <boot dev='cdrom'/>
	I1004 03:18:06.212228   30630 main.go:141] libmachine: (ha-994751)     <boot dev='hd'/>
	I1004 03:18:06.212253   30630 main.go:141] libmachine: (ha-994751)     <bootmenu enable='no'/>
	I1004 03:18:06.212268   30630 main.go:141] libmachine: (ha-994751)   </os>
	I1004 03:18:06.212286   30630 main.go:141] libmachine: (ha-994751)   <devices>
	I1004 03:18:06.212296   30630 main.go:141] libmachine: (ha-994751)     <disk type='file' device='cdrom'>
	I1004 03:18:06.212309   30630 main.go:141] libmachine: (ha-994751)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/boot2docker.iso'/>
	I1004 03:18:06.212319   30630 main.go:141] libmachine: (ha-994751)       <target dev='hdc' bus='scsi'/>
	I1004 03:18:06.212330   30630 main.go:141] libmachine: (ha-994751)       <readonly/>
	I1004 03:18:06.212334   30630 main.go:141] libmachine: (ha-994751)     </disk>
	I1004 03:18:06.212342   30630 main.go:141] libmachine: (ha-994751)     <disk type='file' device='disk'>
	I1004 03:18:06.212354   30630 main.go:141] libmachine: (ha-994751)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 03:18:06.212370   30630 main.go:141] libmachine: (ha-994751)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/ha-994751.rawdisk'/>
	I1004 03:18:06.212380   30630 main.go:141] libmachine: (ha-994751)       <target dev='hda' bus='virtio'/>
	I1004 03:18:06.212388   30630 main.go:141] libmachine: (ha-994751)     </disk>
	I1004 03:18:06.212397   30630 main.go:141] libmachine: (ha-994751)     <interface type='network'>
	I1004 03:18:06.212406   30630 main.go:141] libmachine: (ha-994751)       <source network='mk-ha-994751'/>
	I1004 03:18:06.212415   30630 main.go:141] libmachine: (ha-994751)       <model type='virtio'/>
	I1004 03:18:06.212440   30630 main.go:141] libmachine: (ha-994751)     </interface>
	I1004 03:18:06.212460   30630 main.go:141] libmachine: (ha-994751)     <interface type='network'>
	I1004 03:18:06.212467   30630 main.go:141] libmachine: (ha-994751)       <source network='default'/>
	I1004 03:18:06.212471   30630 main.go:141] libmachine: (ha-994751)       <model type='virtio'/>
	I1004 03:18:06.212479   30630 main.go:141] libmachine: (ha-994751)     </interface>
	I1004 03:18:06.212494   30630 main.go:141] libmachine: (ha-994751)     <serial type='pty'>
	I1004 03:18:06.212502   30630 main.go:141] libmachine: (ha-994751)       <target port='0'/>
	I1004 03:18:06.212508   30630 main.go:141] libmachine: (ha-994751)     </serial>
	I1004 03:18:06.212516   30630 main.go:141] libmachine: (ha-994751)     <console type='pty'>
	I1004 03:18:06.212520   30630 main.go:141] libmachine: (ha-994751)       <target type='serial' port='0'/>
	I1004 03:18:06.212542   30630 main.go:141] libmachine: (ha-994751)     </console>
	I1004 03:18:06.212560   30630 main.go:141] libmachine: (ha-994751)     <rng model='virtio'>
	I1004 03:18:06.212574   30630 main.go:141] libmachine: (ha-994751)       <backend model='random'>/dev/random</backend>
	I1004 03:18:06.212585   30630 main.go:141] libmachine: (ha-994751)     </rng>
	I1004 03:18:06.212593   30630 main.go:141] libmachine: (ha-994751)     
	I1004 03:18:06.212602   30630 main.go:141] libmachine: (ha-994751)     
	I1004 03:18:06.212610   30630 main.go:141] libmachine: (ha-994751)   </devices>
	I1004 03:18:06.212618   30630 main.go:141] libmachine: (ha-994751) </domain>
	I1004 03:18:06.212627   30630 main.go:141] libmachine: (ha-994751) 
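	The XML dump above is the libvirt domain definition that the kvm2 driver hands to libvirt just before the "Creating domain..." step below. As a rough illustration only (not minikube's driver code), defining and booting a domain from such an XML file with the libvirt.org/go/libvirt Go bindings looks roughly like this:

    package main

    import (
        "log"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        // Read a domain definition like the one logged above.
        xml, err := os.ReadFile("ha-994751.xml")
        if err != nil {
            log.Fatal(err)
        }

        // Connect to the system libvirt daemon (same URI as KVMQemuURI in the config dump).
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define the persistent domain, then boot it ("Creating domain...").
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        log.Println("domain started; now waiting for a DHCP lease")
    }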
	I1004 03:18:06.216801   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:e9:7d:48 in network default
	I1004 03:18:06.217289   30630 main.go:141] libmachine: (ha-994751) Ensuring networks are active...
	I1004 03:18:06.217308   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:06.217978   30630 main.go:141] libmachine: (ha-994751) Ensuring network default is active
	I1004 03:18:06.218330   30630 main.go:141] libmachine: (ha-994751) Ensuring network mk-ha-994751 is active
	I1004 03:18:06.218792   30630 main.go:141] libmachine: (ha-994751) Getting domain xml...
	I1004 03:18:06.219458   30630 main.go:141] libmachine: (ha-994751) Creating domain...
	I1004 03:18:07.407094   30630 main.go:141] libmachine: (ha-994751) Waiting to get IP...
	I1004 03:18:07.407817   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:07.408229   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:07.408273   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:07.408187   30653 retry.go:31] will retry after 265.096314ms: waiting for machine to come up
	I1004 03:18:07.674734   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:07.675129   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:07.675155   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:07.675076   30653 retry.go:31] will retry after 390.620211ms: waiting for machine to come up
	I1004 03:18:08.067622   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:08.068086   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:08.068114   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:08.068031   30653 retry.go:31] will retry after 362.909556ms: waiting for machine to come up
	I1004 03:18:08.432460   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:08.432888   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:08.432909   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:08.432822   30653 retry.go:31] will retry after 609.869022ms: waiting for machine to come up
	I1004 03:18:09.044728   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:09.045180   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:09.045206   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:09.045129   30653 retry.go:31] will retry after 721.849297ms: waiting for machine to come up
	I1004 03:18:09.769005   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:09.769517   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:09.769542   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:09.769465   30653 retry.go:31] will retry after 920.066652ms: waiting for machine to come up
	I1004 03:18:10.691477   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:10.691934   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:10.691982   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:10.691880   30653 retry.go:31] will retry after 915.375779ms: waiting for machine to come up
	I1004 03:18:11.608614   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:11.609000   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:11.609026   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:11.608956   30653 retry.go:31] will retry after 1.213056064s: waiting for machine to come up
	I1004 03:18:12.823425   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:12.823843   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:12.823863   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:12.823799   30653 retry.go:31] will retry after 1.167496597s: waiting for machine to come up
	I1004 03:18:13.993222   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:13.993651   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:13.993670   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:13.993625   30653 retry.go:31] will retry after 1.774059142s: waiting for machine to come up
	I1004 03:18:15.769014   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:15.769477   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:15.769521   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:15.769420   30653 retry.go:31] will retry after 2.081580382s: waiting for machine to come up
	I1004 03:18:17.853131   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:17.853479   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:17.853503   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:17.853441   30653 retry.go:31] will retry after 3.090115259s: waiting for machine to come up
	I1004 03:18:20.945030   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:20.945469   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:20.945493   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:20.945409   30653 retry.go:31] will retry after 4.314609333s: waiting for machine to come up
	I1004 03:18:25.264846   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:25.265316   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:25.265335   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:25.265278   30653 retry.go:31] will retry after 4.302479318s: waiting for machine to come up
	I1004 03:18:29.572575   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.572946   30630 main.go:141] libmachine: (ha-994751) Found IP for machine: 192.168.39.65
	I1004 03:18:29.572975   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has current primary IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
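	The "will retry after ..." lines above show the driver polling the DHCP leases of network mk-ha-994751 for the domain's MAC address, with waits growing from ~265ms to several seconds until 192.168.39.65 is handed out. A generic retry-with-growing-backoff loop in the same spirit (a sketch; the delays, cap and jitter here are illustrative, not minikube's retry.go):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it reports an address or the timeout elapses,
    // sleeping a little longer after every failed attempt (as in the log above,
    // where waits grow from ~265ms up to a few seconds).
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            if time.Now().After(deadline) {
                return "", errors.New("timed out waiting for machine to come up")
            }
            // Sleep with a little jitter, then grow the delay (capped at 5s).
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
            if delay < 5*time.Second {
                delay *= 2
            }
        }
    }

    func main() {
        start := time.Now()
        ip, err := waitForIP(func() (string, error) {
            // Stand-in for "parse the DHCP leases of mk-ha-994751 for the MAC".
            if time.Since(start) > 3*time.Second {
                return "192.168.39.65", nil
            }
            return "", errors.New("no lease yet")
        }, time.Minute)
        fmt.Println(ip, err)
    }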
	I1004 03:18:29.572983   30630 main.go:141] libmachine: (ha-994751) Reserving static IP address...
	I1004 03:18:29.573371   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find host DHCP lease matching {name: "ha-994751", mac: "52:54:00:9b:b2:a8", ip: "192.168.39.65"} in network mk-ha-994751
	I1004 03:18:29.642317   30630 main.go:141] libmachine: (ha-994751) DBG | Getting to WaitForSSH function...
	I1004 03:18:29.642344   30630 main.go:141] libmachine: (ha-994751) Reserved static IP address: 192.168.39.65
	I1004 03:18:29.642356   30630 main.go:141] libmachine: (ha-994751) Waiting for SSH to be available...
	I1004 03:18:29.644819   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.645174   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:29.645189   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.645350   30630 main.go:141] libmachine: (ha-994751) DBG | Using SSH client type: external
	I1004 03:18:29.645373   30630 main.go:141] libmachine: (ha-994751) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa (-rw-------)
	I1004 03:18:29.645433   30630 main.go:141] libmachine: (ha-994751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 03:18:29.645459   30630 main.go:141] libmachine: (ha-994751) DBG | About to run SSH command:
	I1004 03:18:29.645475   30630 main.go:141] libmachine: (ha-994751) DBG | exit 0
	I1004 03:18:29.768066   30630 main.go:141] libmachine: (ha-994751) DBG | SSH cmd err, output: <nil>: 
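	WaitForSSH above simply runs `exit 0` through the external ssh client with the options shown (StrictHostKeyChecking=no, IdentitiesOnly=yes, the machine's generated id_rsa, user docker) until the command exits cleanly. A sketch of the same readiness probe; the retry interval and two-minute timeout are assumptions, not minikube's exact policy:

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    // sshReady returns nil once "exit 0" can be run over SSH, i.e. sshd is up
    // and key authentication works.
    func sshReady(user, host, keyPath string) error {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            user + "@" + host,
            "exit 0",
        }
        var err error
        for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
            if err = exec.Command("ssh", args...).Run(); err == nil {
                return nil
            }
        }
        return err
    }

    func main() {
        // Values taken from the log above; the key path is the machine's id_rsa.
        key := "/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa"
        if err := sshReady("docker", "192.168.39.65", key); err != nil {
            log.Fatal(err)
        }
        log.Println("SSH is available")
    }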
	I1004 03:18:29.768301   30630 main.go:141] libmachine: (ha-994751) KVM machine creation complete!
	I1004 03:18:29.768621   30630 main.go:141] libmachine: (ha-994751) Calling .GetConfigRaw
	I1004 03:18:29.769131   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:29.769285   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:29.769480   30630 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 03:18:29.769497   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:18:29.770831   30630 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 03:18:29.770850   30630 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 03:18:29.770858   30630 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 03:18:29.770868   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:29.772990   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.773299   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:29.773321   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.773460   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:29.773635   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.773787   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.773964   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:29.774099   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:29.774324   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:29.774336   30630 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 03:18:29.870824   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:18:29.870852   30630 main.go:141] libmachine: Detecting the provisioner...
	I1004 03:18:29.870864   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:29.873067   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.873430   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:29.873464   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.873650   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:29.873816   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.873947   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.874038   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:29.874214   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:29.874367   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:29.874377   30630 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 03:18:29.972554   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 03:18:29.972627   30630 main.go:141] libmachine: found compatible host: buildroot
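	The provisioner is picked by reading /etc/os-release over SSH and matching the ID field; ID=buildroot selects the Buildroot provisioner used for the rest of this log. A minimal parser for that key=value format (quote handling kept deliberately simple):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // parseOSRelease turns KEY=value lines (as printed in the log above) into a map,
    // stripping surrounding quotes and skipping blanks and comments.
    func parseOSRelease(path string) (map[string]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        info := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            info[k] = strings.Trim(v, `"`)
        }
        return info, sc.Err()
    }

    func main() {
        info, err := parseOSRelease("/etc/os-release")
        if err != nil {
            fmt.Println(err)
            return
        }
        // On the minikube guest above this prints "buildroot 2023.02.9".
        fmt.Println(info["ID"], info["VERSION_ID"])
    }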
	I1004 03:18:29.972634   30630 main.go:141] libmachine: Provisioning with buildroot...
	I1004 03:18:29.972640   30630 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:18:29.972883   30630 buildroot.go:166] provisioning hostname "ha-994751"
	I1004 03:18:29.972906   30630 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:18:29.973092   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:29.975627   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.976040   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:29.976059   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.976197   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:29.976336   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.976489   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.976626   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:29.976745   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:29.976951   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:29.976969   30630 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-994751 && echo "ha-994751" | sudo tee /etc/hostname
	I1004 03:18:30.090454   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-994751
	
	I1004 03:18:30.090480   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.094372   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.094783   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.094812   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.094993   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.095167   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.095331   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.095446   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.095586   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:30.095799   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:30.095822   30630 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-994751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-994751/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-994751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:18:30.200998   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:18:30.201031   30630 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 03:18:30.201106   30630 buildroot.go:174] setting up certificates
	I1004 03:18:30.201120   30630 provision.go:84] configureAuth start
	I1004 03:18:30.201131   30630 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:18:30.201353   30630 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:18:30.203920   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.204369   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.204390   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.204563   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.206770   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.207168   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.207195   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.207325   30630 provision.go:143] copyHostCerts
	I1004 03:18:30.207355   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:18:30.207398   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 03:18:30.207407   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:18:30.207474   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 03:18:30.207553   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:18:30.207574   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 03:18:30.207581   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:18:30.207605   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 03:18:30.207644   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:18:30.207661   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 03:18:30.207671   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:18:30.207691   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 03:18:30.207739   30630 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.ha-994751 san=[127.0.0.1 192.168.39.65 ha-994751 localhost minikube]
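	configureAuth issues a server certificate signed by the local minikube CA with exactly the SANs listed above (127.0.0.1, 192.168.39.65, ha-994751, localhost, minikube). A compact crypto/x509 sketch of issuing such a certificate; the key size, validity, throwaway CA and trimmed error handling are simplifications, not the docker/machine certificate code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // In the real flow the CA comes from ca.pem/ca-key.pem; a throwaway CA
        // keeps this example self-contained (errors ignored for brevity).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"example CA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs from the log above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-994751"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            DNSNames:     []string{"ha-994751", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.65")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }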
	I1004 03:18:30.399105   30630 provision.go:177] copyRemoteCerts
	I1004 03:18:30.399156   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:18:30.399185   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.401949   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.402239   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.402273   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.402458   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.402612   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.402732   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.402824   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:30.481271   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:18:30.481342   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 03:18:30.505491   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:18:30.505567   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:18:30.528533   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:18:30.528602   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1004 03:18:30.551611   30630 provision.go:87] duration metric: took 350.480163ms to configureAuth
	I1004 03:18:30.551641   30630 buildroot.go:189] setting minikube options for container-runtime
	I1004 03:18:30.551807   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:18:30.551909   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.554312   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.554641   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.554668   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.554833   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.554998   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.555138   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.555257   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.555398   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:30.555570   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:30.555585   30630 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:18:30.762357   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:18:30.762381   30630 main.go:141] libmachine: Checking connection to Docker...
	I1004 03:18:30.762388   30630 main.go:141] libmachine: (ha-994751) Calling .GetURL
	I1004 03:18:30.763606   30630 main.go:141] libmachine: (ha-994751) DBG | Using libvirt version 6000000
	I1004 03:18:30.765692   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.766020   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.766048   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.766206   30630 main.go:141] libmachine: Docker is up and running!
	I1004 03:18:30.766228   30630 main.go:141] libmachine: Reticulating splines...
	I1004 03:18:30.766236   30630 client.go:171] duration metric: took 24.951657625s to LocalClient.Create
	I1004 03:18:30.766258   30630 start.go:167] duration metric: took 24.951708327s to libmachine.API.Create "ha-994751"
	I1004 03:18:30.766279   30630 start.go:293] postStartSetup for "ha-994751" (driver="kvm2")
	I1004 03:18:30.766291   30630 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:18:30.766310   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.766550   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:18:30.766573   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.768581   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.768893   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.768918   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.769018   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.769215   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.769374   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.769501   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:30.850107   30630 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:18:30.854350   30630 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 03:18:30.854372   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 03:18:30.854448   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 03:18:30.854554   30630 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 03:18:30.854567   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /etc/ssl/certs/168792.pem
	I1004 03:18:30.854687   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:18:30.863939   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:18:30.887968   30630 start.go:296] duration metric: took 121.677235ms for postStartSetup
	I1004 03:18:30.888032   30630 main.go:141] libmachine: (ha-994751) Calling .GetConfigRaw
	I1004 03:18:30.888647   30630 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:18:30.891188   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.891538   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.891578   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.891766   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:18:30.891959   30630 start.go:128] duration metric: took 25.095424862s to createHost
	I1004 03:18:30.891980   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.894352   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.894614   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.894640   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.894753   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.894910   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.895041   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.895137   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.895264   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:30.895466   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:30.895480   30630 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 03:18:30.992599   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011910.970126057
	
	I1004 03:18:30.992618   30630 fix.go:216] guest clock: 1728011910.970126057
	I1004 03:18:30.992625   30630 fix.go:229] Guest: 2024-10-04 03:18:30.970126057 +0000 UTC Remote: 2024-10-04 03:18:30.89197094 +0000 UTC m=+25.204801944 (delta=78.155117ms)
	I1004 03:18:30.992662   30630 fix.go:200] guest clock delta is within tolerance: 78.155117ms
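	The guest clock check runs `date +%s.%N` on the new VM and compares the result with the host-side timestamp; the 78.155117ms delta above is accepted because it falls inside the tolerance. A sketch of parsing that output and computing the delta (the one-second tolerance below is an assumption; only the sample values come from the log):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the "seconds.nanoseconds" string printed by `date +%s.%N`
    // on the guest and returns guestTime - hostNow.
    func clockDelta(guest string, hostNow time.Time) (time.Duration, error) {
        secs, frac, _ := strings.Cut(strings.TrimSpace(guest), ".")
        s, err := strconv.ParseInt(secs, 10, 64)
        if err != nil {
            return 0, err
        }
        n, err := strconv.ParseInt(frac, 10, 64)
        if err != nil {
            return 0, err
        }
        return time.Unix(s, n).Sub(hostNow), nil
    }

    func main() {
        // Guest and host timestamps taken from the log above; this prints ~78.155117ms.
        d, err := clockDelta("1728011910.970126057", time.Unix(1728011910, 891970940))
        if err != nil {
            panic(err)
        }
        const tolerance = time.Second
        fmt.Printf("delta=%v within tolerance=%v: %v\n", d, tolerance, d > -tolerance && d < tolerance)
    }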
	I1004 03:18:30.992667   30630 start.go:83] releasing machines lock for "ha-994751", held for 25.19620396s
	I1004 03:18:30.992685   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.992896   30630 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:18:30.995326   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.995629   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.995653   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.995813   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.996311   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.996458   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.996541   30630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:18:30.996578   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.996668   30630 ssh_runner.go:195] Run: cat /version.json
	I1004 03:18:30.996687   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.999188   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.999227   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.999574   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.999599   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.999648   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.999673   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.999727   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.999923   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.999936   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:31.000065   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:31.000137   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:31.000197   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:31.000242   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:31.000338   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:31.092724   30630 ssh_runner.go:195] Run: systemctl --version
	I1004 03:18:31.098738   30630 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:18:31.257592   30630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 03:18:31.263326   30630 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 03:18:31.263402   30630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:18:31.278780   30630 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 03:18:31.278800   30630 start.go:495] detecting cgroup driver to use...
	I1004 03:18:31.278866   30630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:18:31.295874   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:18:31.310006   30630 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:18:31.310076   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:18:31.323189   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:18:31.336586   30630 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:18:31.452424   30630 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:18:31.611505   30630 docker.go:233] disabling docker service ...
	I1004 03:18:31.611576   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:18:31.625795   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:18:31.640666   30630 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:18:31.774429   30630 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:18:31.903530   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:18:31.917157   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:18:31.935039   30630 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:18:31.935118   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.945550   30630 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:18:31.945617   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.955961   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.966381   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.976764   30630 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:18:31.987308   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.997608   30630 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:32.014334   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:32.025406   30630 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:18:32.035105   30630 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 03:18:32.035157   30630 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 03:18:32.048803   30630 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:18:32.058421   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:18:32.175897   30630 ssh_runner.go:195] Run: sudo systemctl restart crio
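	The sed one-liners above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause_image, cgroup_manager, conmon_cgroup, the unprivileged-port sysctl) before crio is restarted. The same "replace the whole key line or append it" pattern written out as a small Go sketch rather than sed (illustrative only; it covers the simple replace/append case, not the delete-and-insert edits):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setKey replaces any existing `key = ...` line in the config with the given
    // value, or appends one if the key is absent, mirroring the sed edits above.
    func setKey(conf []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
        line := fmt.Sprintf("%s = %s", key, value)
        if re.Match(conf) {
            return re.ReplaceAll(conf, []byte(line))
        }
        return append(conf, []byte("\n"+line+"\n")...)
    }

    func main() {
        path := "/etc/crio/crio.conf.d/02-crio.conf"
        conf, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        conf = setKey(conf, "pause_image", `"registry.k8s.io/pause:3.10"`)
        conf = setKey(conf, "cgroup_manager", `"cgroupfs"`)
        if err := os.WriteFile(path, conf, 0o644); err != nil {
            panic(err)
        }
    }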
	I1004 03:18:32.272377   30630 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:18:32.272435   30630 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:18:32.277743   30630 start.go:563] Will wait 60s for crictl version
	I1004 03:18:32.277805   30630 ssh_runner.go:195] Run: which crictl
	I1004 03:18:32.281362   30630 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:18:32.318848   30630 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 03:18:32.318925   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:18:32.346909   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:18:32.375477   30630 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 03:18:32.376825   30630 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:18:32.379208   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:32.379571   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:32.379594   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:32.379801   30630 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 03:18:32.384207   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:18:32.397053   30630 kubeadm.go:883] updating cluster {Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 03:18:32.397153   30630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:18:32.397223   30630 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:18:32.434648   30630 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 03:18:32.434703   30630 ssh_runner.go:195] Run: which lz4
	I1004 03:18:32.438603   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1004 03:18:32.438682   30630 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 03:18:32.442788   30630 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 03:18:32.442821   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 03:18:33.747633   30630 crio.go:462] duration metric: took 1.308983475s to copy over tarball
	I1004 03:18:33.747699   30630 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 03:18:35.713127   30630 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.965391744s)
	I1004 03:18:35.713157   30630 crio.go:469] duration metric: took 1.965495286s to extract the tarball
	I1004 03:18:35.713167   30630 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 03:18:35.749886   30630 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:18:35.795226   30630 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:18:35.795249   30630 cache_images.go:84] Images are preloaded, skipping loading
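	Whether the preload can be skipped is decided by listing the runtime's images with `sudo crictl images --output json` and looking for the expected kube-apiserver tag; after the tarball is extracted the same listing succeeds and loading is skipped. A sketch of that check; the JSON field names ("images", "repoTags") reflect crictl's output format as I understand it and should be treated as an assumption:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether the CRI runtime already has the given tag loaded.
    func hasImage(tag string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
        fmt.Println(ok, err)
    }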
	I1004 03:18:35.795257   30630 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.31.1 crio true true} ...
	I1004 03:18:35.795346   30630 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-994751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 03:18:35.795408   30630 ssh_runner.go:195] Run: crio config
	I1004 03:18:35.841695   30630 cni.go:84] Creating CNI manager for ""
	I1004 03:18:35.841718   30630 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1004 03:18:35.841728   30630 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 03:18:35.841746   30630 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-994751 NodeName:ha-994751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 03:18:35.841868   30630 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-994751"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 03:18:35.841893   30630 kube-vip.go:115] generating kube-vip config ...
	I1004 03:18:35.841933   30630 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1004 03:18:35.858111   30630 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1004 03:18:35.858218   30630 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
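The static pod above runs kube-vip v0.8.3 with ARP-based leader election and, because lb_enable/lb_port were injected, load-balances control-plane traffic on port 8443, so 192.168.39.254 becomes the shared HA endpoint that the kubeconfigs below point at. A rough reachability sketch (hypothetical; anonymous access to /version relies on the default RBAC being in place) would be:

    $ curl -k https://192.168.39.254:8443/version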
	I1004 03:18:35.858274   30630 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:18:35.867809   30630 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 03:18:35.867872   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1004 03:18:35.876830   30630 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1004 03:18:35.892172   30630 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:18:35.907631   30630 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1004 03:18:35.923147   30630 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1004 03:18:35.939242   30630 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:18:35.943241   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
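This command rewrites /etc/hosts inside the VM so that control-plane.minikube.internal, the controlPlaneEndpoint used throughout the kubeadm config, resolves to the kube-vip address rather than to a single node. A minimal check of the resulting entry:

    $ getent hosts control-plane.minikube.internal
    192.168.39.254  control-plane.minikube.internal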
	I1004 03:18:35.955036   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:18:36.063830   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:18:36.080131   30630 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751 for IP: 192.168.39.65
	I1004 03:18:36.080153   30630 certs.go:194] generating shared ca certs ...
	I1004 03:18:36.080169   30630 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.080303   30630 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 03:18:36.080336   30630 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 03:18:36.080345   30630 certs.go:256] generating profile certs ...
	I1004 03:18:36.080388   30630 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key
	I1004 03:18:36.080414   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt with IP's: []
	I1004 03:18:36.205325   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt ...
	I1004 03:18:36.205354   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt: {Name:mk097459d54d355cf05d74a196b72b51ed16216c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.205539   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key ...
	I1004 03:18:36.205553   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key: {Name:mka6efef398570320df79b26ee2d84116b88400b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.205628   30630 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.211fcd35
	I1004 03:18:36.205642   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.211fcd35 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65 192.168.39.254]
	I1004 03:18:36.278398   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.211fcd35 ...
	I1004 03:18:36.278426   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.211fcd35: {Name:mk5a54fedcb658e02d5a59c4cc7f959d0efc3b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.278574   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.211fcd35 ...
	I1004 03:18:36.278586   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.211fcd35: {Name:mk30bcb47c9e314eff3c9b6a3bb1c1b8ba019417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.278653   30630 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.211fcd35 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt
	I1004 03:18:36.278741   30630 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.211fcd35 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key
	I1004 03:18:36.278802   30630 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key
	I1004 03:18:36.278825   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt with IP's: []
	I1004 03:18:36.411462   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt ...
	I1004 03:18:36.411499   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt: {Name:mk5cbb9b0a13c8121c937d53956001313fc362d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.411652   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key ...
	I1004 03:18:36.411663   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key: {Name:mkcfa953ddb2aa55fb392dd2b0300dc4d7ed9a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.411729   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:18:36.411745   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:18:36.411758   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:18:36.411771   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:18:36.411798   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:18:36.411811   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:18:36.411823   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:18:36.411835   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:18:36.411884   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 03:18:36.411919   30630 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 03:18:36.411928   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:18:36.411953   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:18:36.411976   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:18:36.411996   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 03:18:36.412030   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:18:36.412053   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:18:36.412066   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem -> /usr/share/ca-certificates/16879.pem
	I1004 03:18:36.412078   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /usr/share/ca-certificates/168792.pem
	I1004 03:18:36.412548   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:18:36.441146   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 03:18:36.468175   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:18:36.494488   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 03:18:36.520930   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1004 03:18:36.546306   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 03:18:36.571622   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:18:36.595650   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:18:36.619154   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:18:36.643284   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 03:18:36.666998   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 03:18:36.692308   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 03:18:36.710569   30630 ssh_runner.go:195] Run: openssl version
	I1004 03:18:36.722532   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:18:36.738971   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:18:36.743511   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:18:36.743568   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:18:36.749416   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:18:36.760315   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 03:18:36.771516   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 03:18:36.776032   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:18:36.776090   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 03:18:36.781784   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 03:18:36.792883   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 03:18:36.804051   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 03:18:36.808536   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:18:36.808596   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 03:18:36.814260   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
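The symlink names used here follow the OpenSSL hashed-directory convention: each link under /etc/ssl/certs is called <subject-hash>.0, where the hash is exactly what the preceding openssl x509 -hash -noout call printed. For the minikube CA the pairing is:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ ls -l /etc/ssl/certs/b5213941.0   # -> /etc/ssl/certs/minikubeCA.pem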
	I1004 03:18:36.827637   30630 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:18:36.833576   30630 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 03:18:36.833628   30630 kubeadm.go:392] StartCluster: {Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:18:36.833720   30630 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 03:18:36.833768   30630 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 03:18:36.890855   30630 cri.go:89] found id: ""
	I1004 03:18:36.890927   30630 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 03:18:36.902870   30630 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 03:18:36.912801   30630 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 03:18:36.922312   30630 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 03:18:36.922332   30630 kubeadm.go:157] found existing configuration files:
	
	I1004 03:18:36.922378   30630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 03:18:36.931373   30630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 03:18:36.931434   30630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 03:18:36.940992   30630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 03:18:36.949951   30630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 03:18:36.950008   30630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 03:18:36.959253   30630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 03:18:36.968235   30630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 03:18:36.968290   30630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 03:18:36.977554   30630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 03:18:36.986351   30630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 03:18:36.986408   30630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 03:18:36.995719   30630 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
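The init is run with a long --ignore-preflight-errors list because the test VM is expected to trip checks such as ports, swap, CPU/memory and pre-existing manifest files. If one of those checks ever needed to be inspected on its own, kubeadm can replay just that phase against the same config, e.g. (illustrative only):

    $ sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml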
	I1004 03:18:37.089352   30630 kubeadm.go:310] W1004 03:18:37.073375     834 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 03:18:37.090411   30630 kubeadm.go:310] W1004 03:18:37.074383     834 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 03:18:37.191769   30630 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
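Both deprecation warnings are expected with Kubernetes v1.31: minikube still templates kubeadm.k8s.io/v1beta3, which kubeadm 1.31 accepts but flags. The migration it suggests is the command quoted in the warning itself, roughly (output filename arbitrary):

    $ kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml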
	I1004 03:18:47.918991   30630 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 03:18:47.919112   30630 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 03:18:47.919261   30630 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 03:18:47.919365   30630 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 03:18:47.919464   30630 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 03:18:47.919518   30630 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 03:18:47.920818   30630 out.go:235]   - Generating certificates and keys ...
	I1004 03:18:47.920882   30630 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 03:18:47.920936   30630 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 03:18:47.921009   30630 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 03:18:47.921075   30630 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1004 03:18:47.921133   30630 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1004 03:18:47.921203   30630 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1004 03:18:47.921280   30630 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1004 03:18:47.921443   30630 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-994751 localhost] and IPs [192.168.39.65 127.0.0.1 ::1]
	I1004 03:18:47.921519   30630 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1004 03:18:47.921666   30630 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-994751 localhost] and IPs [192.168.39.65 127.0.0.1 ::1]
	I1004 03:18:47.921762   30630 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 03:18:47.921849   30630 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 03:18:47.921910   30630 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1004 03:18:47.922005   30630 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 03:18:47.922057   30630 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 03:18:47.922112   30630 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 03:18:47.922177   30630 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 03:18:47.922290   30630 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 03:18:47.922377   30630 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 03:18:47.922447   30630 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 03:18:47.922519   30630 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 03:18:47.923983   30630 out.go:235]   - Booting up control plane ...
	I1004 03:18:47.924085   30630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 03:18:47.924153   30630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 03:18:47.924208   30630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 03:18:47.924334   30630 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 03:18:47.924425   30630 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 03:18:47.924472   30630 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 03:18:47.924582   30630 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 03:18:47.924675   30630 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 03:18:47.924735   30630 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001267899s
	I1004 03:18:47.924846   30630 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 03:18:47.924901   30630 kubeadm.go:310] [api-check] The API server is healthy after 5.62627754s
	I1004 03:18:47.924992   30630 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 03:18:47.925097   30630 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 03:18:47.925151   30630 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 03:18:47.925310   30630 kubeadm.go:310] [mark-control-plane] Marking the node ha-994751 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 03:18:47.925388   30630 kubeadm.go:310] [bootstrap-token] Using token: t8dola.kmwzcq881z4dnfcq
	I1004 03:18:47.926624   30630 out.go:235]   - Configuring RBAC rules ...
	I1004 03:18:47.926738   30630 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 03:18:47.926809   30630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 03:18:47.926957   30630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 03:18:47.927140   30630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 03:18:47.927310   30630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 03:18:47.927398   30630 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 03:18:47.927508   30630 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 03:18:47.927559   30630 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 03:18:47.927607   30630 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 03:18:47.927613   30630 kubeadm.go:310] 
	I1004 03:18:47.927661   30630 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 03:18:47.927667   30630 kubeadm.go:310] 
	I1004 03:18:47.927736   30630 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 03:18:47.927742   30630 kubeadm.go:310] 
	I1004 03:18:47.927764   30630 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 03:18:47.927863   30630 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 03:18:47.927918   30630 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 03:18:47.927926   30630 kubeadm.go:310] 
	I1004 03:18:47.927996   30630 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 03:18:47.928006   30630 kubeadm.go:310] 
	I1004 03:18:47.928052   30630 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 03:18:47.928059   30630 kubeadm.go:310] 
	I1004 03:18:47.928102   30630 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 03:18:47.928189   30630 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 03:18:47.928261   30630 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 03:18:47.928268   30630 kubeadm.go:310] 
	I1004 03:18:47.928337   30630 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 03:18:47.928401   30630 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 03:18:47.928407   30630 kubeadm.go:310] 
	I1004 03:18:47.928480   30630 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t8dola.kmwzcq881z4dnfcq \
	I1004 03:18:47.928565   30630 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 \
	I1004 03:18:47.928587   30630 kubeadm.go:310] 	--control-plane 
	I1004 03:18:47.928593   30630 kubeadm.go:310] 
	I1004 03:18:47.928677   30630 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 03:18:47.928689   30630 kubeadm.go:310] 
	I1004 03:18:47.928756   30630 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t8dola.kmwzcq881z4dnfcq \
	I1004 03:18:47.928856   30630 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 
	I1004 03:18:47.928865   30630 cni.go:84] Creating CNI manager for ""
	I1004 03:18:47.928870   30630 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1004 03:18:47.930177   30630 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1004 03:18:47.931356   30630 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1004 03:18:47.936846   30630 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1004 03:18:47.936861   30630 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1004 03:18:47.954946   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
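With one node found, minikube selects kindnet as the CNI and applies its manifest with the bundled kubectl. Assuming the DaemonSet keeps its usual name (kindnet, in kube-system), a rollout check would look like:

    $ kubectl -n kube-system rollout status daemonset kindnet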
	I1004 03:18:48.341839   30630 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 03:18:48.341927   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-994751 minikube.k8s.io/updated_at=2024_10_04T03_18_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-994751 minikube.k8s.io/primary=true
	I1004 03:18:48.341931   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:48.378883   30630 ops.go:34] apiserver oom_adj: -16
	I1004 03:18:48.535248   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:49.035895   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:49.535506   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:50.036160   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:50.536177   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:51.036074   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:51.535453   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:52.036318   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:52.141351   30630 kubeadm.go:1113] duration metric: took 3.799503635s to wait for elevateKubeSystemPrivileges
	I1004 03:18:52.141482   30630 kubeadm.go:394] duration metric: took 15.307852794s to StartCluster
	I1004 03:18:52.141506   30630 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:52.141595   30630 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:18:52.142340   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:52.142543   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 03:18:52.142540   30630 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:18:52.142619   30630 start.go:241] waiting for startup goroutines ...
	I1004 03:18:52.142559   30630 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 03:18:52.142650   30630 addons.go:69] Setting default-storageclass=true in profile "ha-994751"
	I1004 03:18:52.142673   30630 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-994751"
	I1004 03:18:52.142653   30630 addons.go:69] Setting storage-provisioner=true in profile "ha-994751"
	I1004 03:18:52.142785   30630 addons.go:234] Setting addon storage-provisioner=true in "ha-994751"
	I1004 03:18:52.142836   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:18:52.142751   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:18:52.143105   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.143135   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.143203   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.143243   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.158739   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38855
	I1004 03:18:52.159139   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.159746   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.159801   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.160123   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.160704   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.160750   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.163696   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35851
	I1004 03:18:52.164259   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.164849   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.164876   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.165236   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.165397   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:18:52.167571   30630 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:18:52.167892   30630 kapi.go:59] client config for ha-994751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key", CAFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 03:18:52.168431   30630 cert_rotation.go:140] Starting client certificate rotation controller
	I1004 03:18:52.168621   30630 addons.go:234] Setting addon default-storageclass=true in "ha-994751"
	I1004 03:18:52.168661   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:18:52.168962   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.168995   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.177647   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33667
	I1004 03:18:52.178272   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.178780   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.178807   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.179185   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.179369   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:18:52.181245   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:52.182949   30630 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 03:18:52.184312   30630 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 03:18:52.184328   30630 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 03:18:52.184342   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:52.185802   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36897
	I1004 03:18:52.186249   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.186707   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.186731   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.187053   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.187403   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:52.187660   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.187699   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.187838   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:52.187860   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:52.188023   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:52.188171   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:52.188318   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:52.188522   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:52.202680   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36473
	I1004 03:18:52.203159   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.203886   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.203918   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.204247   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.204428   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:18:52.205967   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:52.206173   30630 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 03:18:52.206189   30630 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 03:18:52.206206   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:52.208832   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:52.209269   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:52.209304   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:52.209405   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:52.209567   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:52.209709   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:52.209838   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:52.346822   30630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 03:18:52.355141   30630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 03:18:52.371008   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
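This pipeline edits the coredns ConfigMap in place: it inserts a hosts block mapping 192.168.39.1 to host.minikube.internal ahead of the forward plugin and enables the log plugin, then replaces the ConfigMap. Once the "host record injected" line below confirms it, the result can be read back with something like:

    $ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -B1 -A3 'hosts {'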
	I1004 03:18:52.715722   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.715742   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.716027   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.716068   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.716084   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.716095   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.716104   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.716350   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.716358   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.716370   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.716432   30630 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1004 03:18:52.716457   30630 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1004 03:18:52.716568   30630 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1004 03:18:52.716579   30630 round_trippers.go:469] Request Headers:
	I1004 03:18:52.716592   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:18:52.716603   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:18:52.723900   30630 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:18:52.724457   30630 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1004 03:18:52.724472   30630 round_trippers.go:469] Request Headers:
	I1004 03:18:52.724481   30630 round_trippers.go:473]     Content-Type: application/json
	I1004 03:18:52.724485   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:18:52.724494   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:18:52.728158   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:18:52.728358   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.728379   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.728631   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.728667   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.728678   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.991032   30630 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1004 03:18:52.991106   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.991118   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.991464   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.991518   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.991525   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.991538   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.991549   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.991787   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.991815   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.991835   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.993564   30630 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1004 03:18:52.994914   30630 addons.go:510] duration metric: took 852.347466ms for enable addons: enabled=[default-storageclass storage-provisioner]
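Both enabled addons correspond to the manifests applied just above: storage-provisioner deploys the provisioner pod and default-storageclass installs the standard StorageClass (the PUT to /storageclasses/standard earlier). A quick check, assuming standard is marked as the default class as usual:

    $ kubectl get storageclass
    # 'standard' should be listed and annotated as (default)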
	I1004 03:18:52.994963   30630 start.go:246] waiting for cluster config update ...
	I1004 03:18:52.994978   30630 start.go:255] writing updated cluster config ...
	I1004 03:18:52.996475   30630 out.go:201] 
	I1004 03:18:52.997828   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:18:52.997937   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:18:52.999684   30630 out.go:177] * Starting "ha-994751-m02" control-plane node in "ha-994751" cluster
	I1004 03:18:53.001098   30630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:18:53.001129   30630 cache.go:56] Caching tarball of preloaded images
	I1004 03:18:53.001252   30630 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 03:18:53.001270   30630 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:18:53.001389   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:18:53.001704   30630 start.go:360] acquireMachinesLock for ha-994751-m02: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 03:18:53.001767   30630 start.go:364] duration metric: took 36.717µs to acquireMachinesLock for "ha-994751-m02"
	I1004 03:18:53.001788   30630 start.go:93] Provisioning new machine with config: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:18:53.001888   30630 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1004 03:18:53.003601   30630 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 03:18:53.003685   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:53.003710   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:53.018286   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I1004 03:18:53.018739   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:53.019227   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:53.019248   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:53.019586   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:53.019746   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetMachineName
	I1004 03:18:53.019878   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:18:53.020036   30630 start.go:159] libmachine.API.Create for "ha-994751" (driver="kvm2")
	I1004 03:18:53.020058   30630 client.go:168] LocalClient.Create starting
	I1004 03:18:53.020084   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 03:18:53.020121   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:18:53.020141   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:18:53.020189   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 03:18:53.020206   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:18:53.020216   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:18:53.020231   30630 main.go:141] libmachine: Running pre-create checks...
	I1004 03:18:53.020238   30630 main.go:141] libmachine: (ha-994751-m02) Calling .PreCreateCheck
	I1004 03:18:53.020407   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetConfigRaw
	I1004 03:18:53.020742   30630 main.go:141] libmachine: Creating machine...
	I1004 03:18:53.020759   30630 main.go:141] libmachine: (ha-994751-m02) Calling .Create
	I1004 03:18:53.020907   30630 main.go:141] libmachine: (ha-994751-m02) Creating KVM machine...
	I1004 03:18:53.022100   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found existing default KVM network
	I1004 03:18:53.022275   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found existing private KVM network mk-ha-994751
	I1004 03:18:53.022411   30630 main.go:141] libmachine: (ha-994751-m02) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02 ...
	I1004 03:18:53.022435   30630 main.go:141] libmachine: (ha-994751-m02) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 03:18:53.022495   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:53.022407   31016 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:53.022574   30630 main.go:141] libmachine: (ha-994751-m02) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 03:18:53.247842   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:53.247679   31016 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa...
	I1004 03:18:53.574709   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:53.574567   31016 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/ha-994751-m02.rawdisk...
	I1004 03:18:53.574744   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Writing magic tar header
	I1004 03:18:53.574759   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Writing SSH key tar header
	I1004 03:18:53.574776   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:53.574706   31016 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02 ...
	I1004 03:18:53.574856   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02
	I1004 03:18:53.574886   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02 (perms=drwx------)
	I1004 03:18:53.574896   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 03:18:53.574906   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 03:18:53.574926   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 03:18:53.574938   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 03:18:53.574962   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 03:18:53.574971   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:53.574979   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 03:18:53.574992   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 03:18:53.575005   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 03:18:53.575014   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins
	I1004 03:18:53.575020   30630 main.go:141] libmachine: (ha-994751-m02) Creating domain...
	I1004 03:18:53.575036   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home
	I1004 03:18:53.575046   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Skipping /home - not owner
	I1004 03:18:53.575952   30630 main.go:141] libmachine: (ha-994751-m02) define libvirt domain using xml: 
	I1004 03:18:53.575978   30630 main.go:141] libmachine: (ha-994751-m02) <domain type='kvm'>
	I1004 03:18:53.575998   30630 main.go:141] libmachine: (ha-994751-m02)   <name>ha-994751-m02</name>
	I1004 03:18:53.576012   30630 main.go:141] libmachine: (ha-994751-m02)   <memory unit='MiB'>2200</memory>
	I1004 03:18:53.576021   30630 main.go:141] libmachine: (ha-994751-m02)   <vcpu>2</vcpu>
	I1004 03:18:53.576030   30630 main.go:141] libmachine: (ha-994751-m02)   <features>
	I1004 03:18:53.576038   30630 main.go:141] libmachine: (ha-994751-m02)     <acpi/>
	I1004 03:18:53.576047   30630 main.go:141] libmachine: (ha-994751-m02)     <apic/>
	I1004 03:18:53.576055   30630 main.go:141] libmachine: (ha-994751-m02)     <pae/>
	I1004 03:18:53.576064   30630 main.go:141] libmachine: (ha-994751-m02)     
	I1004 03:18:53.576072   30630 main.go:141] libmachine: (ha-994751-m02)   </features>
	I1004 03:18:53.576082   30630 main.go:141] libmachine: (ha-994751-m02)   <cpu mode='host-passthrough'>
	I1004 03:18:53.576089   30630 main.go:141] libmachine: (ha-994751-m02)   
	I1004 03:18:53.576099   30630 main.go:141] libmachine: (ha-994751-m02)   </cpu>
	I1004 03:18:53.576106   30630 main.go:141] libmachine: (ha-994751-m02)   <os>
	I1004 03:18:53.576119   30630 main.go:141] libmachine: (ha-994751-m02)     <type>hvm</type>
	I1004 03:18:53.576130   30630 main.go:141] libmachine: (ha-994751-m02)     <boot dev='cdrom'/>
	I1004 03:18:53.576135   30630 main.go:141] libmachine: (ha-994751-m02)     <boot dev='hd'/>
	I1004 03:18:53.576144   30630 main.go:141] libmachine: (ha-994751-m02)     <bootmenu enable='no'/>
	I1004 03:18:53.576152   30630 main.go:141] libmachine: (ha-994751-m02)   </os>
	I1004 03:18:53.576165   30630 main.go:141] libmachine: (ha-994751-m02)   <devices>
	I1004 03:18:53.576176   30630 main.go:141] libmachine: (ha-994751-m02)     <disk type='file' device='cdrom'>
	I1004 03:18:53.576189   30630 main.go:141] libmachine: (ha-994751-m02)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/boot2docker.iso'/>
	I1004 03:18:53.576200   30630 main.go:141] libmachine: (ha-994751-m02)       <target dev='hdc' bus='scsi'/>
	I1004 03:18:53.576208   30630 main.go:141] libmachine: (ha-994751-m02)       <readonly/>
	I1004 03:18:53.576216   30630 main.go:141] libmachine: (ha-994751-m02)     </disk>
	I1004 03:18:53.576224   30630 main.go:141] libmachine: (ha-994751-m02)     <disk type='file' device='disk'>
	I1004 03:18:53.576236   30630 main.go:141] libmachine: (ha-994751-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 03:18:53.576251   30630 main.go:141] libmachine: (ha-994751-m02)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/ha-994751-m02.rawdisk'/>
	I1004 03:18:53.576261   30630 main.go:141] libmachine: (ha-994751-m02)       <target dev='hda' bus='virtio'/>
	I1004 03:18:53.576285   30630 main.go:141] libmachine: (ha-994751-m02)     </disk>
	I1004 03:18:53.576307   30630 main.go:141] libmachine: (ha-994751-m02)     <interface type='network'>
	I1004 03:18:53.576317   30630 main.go:141] libmachine: (ha-994751-m02)       <source network='mk-ha-994751'/>
	I1004 03:18:53.576324   30630 main.go:141] libmachine: (ha-994751-m02)       <model type='virtio'/>
	I1004 03:18:53.576335   30630 main.go:141] libmachine: (ha-994751-m02)     </interface>
	I1004 03:18:53.576342   30630 main.go:141] libmachine: (ha-994751-m02)     <interface type='network'>
	I1004 03:18:53.576368   30630 main.go:141] libmachine: (ha-994751-m02)       <source network='default'/>
	I1004 03:18:53.576386   30630 main.go:141] libmachine: (ha-994751-m02)       <model type='virtio'/>
	I1004 03:18:53.576401   30630 main.go:141] libmachine: (ha-994751-m02)     </interface>
	I1004 03:18:53.576413   30630 main.go:141] libmachine: (ha-994751-m02)     <serial type='pty'>
	I1004 03:18:53.576421   30630 main.go:141] libmachine: (ha-994751-m02)       <target port='0'/>
	I1004 03:18:53.576429   30630 main.go:141] libmachine: (ha-994751-m02)     </serial>
	I1004 03:18:53.576437   30630 main.go:141] libmachine: (ha-994751-m02)     <console type='pty'>
	I1004 03:18:53.576447   30630 main.go:141] libmachine: (ha-994751-m02)       <target type='serial' port='0'/>
	I1004 03:18:53.576455   30630 main.go:141] libmachine: (ha-994751-m02)     </console>
	I1004 03:18:53.576462   30630 main.go:141] libmachine: (ha-994751-m02)     <rng model='virtio'>
	I1004 03:18:53.576468   30630 main.go:141] libmachine: (ha-994751-m02)       <backend model='random'>/dev/random</backend>
	I1004 03:18:53.576474   30630 main.go:141] libmachine: (ha-994751-m02)     </rng>
	I1004 03:18:53.576479   30630 main.go:141] libmachine: (ha-994751-m02)     
	I1004 03:18:53.576482   30630 main.go:141] libmachine: (ha-994751-m02)     
	I1004 03:18:53.576487   30630 main.go:141] libmachine: (ha-994751-m02)   </devices>
	I1004 03:18:53.576497   30630 main.go:141] libmachine: (ha-994751-m02) </domain>
	I1004 03:18:53.576508   30630 main.go:141] libmachine: (ha-994751-m02) 
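The XML printed above is then handed to libvirt to define and boot the domain (the "Creating domain..." step that follows). As an illustration only, assuming the libvirt Go bindings at libvirt.org/go/libvirt, the equivalent calls would look roughly like this:

    package main

    import (
    	"os"

    	"libvirt.org/go/libvirt"
    )

    // defineAndStart is a rough sketch of the "define libvirt domain using xml"
    // and "Creating domain..." steps in the log above.
    func defineAndStart(xmlPath string) error {
    	conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the config dump
    	if err != nil {
    		return err
    	}
    	defer conn.Close()

    	xml, err := os.ReadFile(xmlPath)
    	if err != nil {
    		return err
    	}
    	dom, err := conn.DomainDefineXML(string(xml)) // register the domain definition
    	if err != nil {
    		return err
    	}
    	defer dom.Free()
    	return dom.Create() // boot the VM
    }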
	I1004 03:18:53.583962   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:dd:b1:40 in network default
	I1004 03:18:53.584709   30630 main.go:141] libmachine: (ha-994751-m02) Ensuring networks are active...
	I1004 03:18:53.584740   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:53.585441   30630 main.go:141] libmachine: (ha-994751-m02) Ensuring network default is active
	I1004 03:18:53.585785   30630 main.go:141] libmachine: (ha-994751-m02) Ensuring network mk-ha-994751 is active
	I1004 03:18:53.586177   30630 main.go:141] libmachine: (ha-994751-m02) Getting domain xml...
	I1004 03:18:53.586870   30630 main.go:141] libmachine: (ha-994751-m02) Creating domain...
	I1004 03:18:54.836669   30630 main.go:141] libmachine: (ha-994751-m02) Waiting to get IP...
	I1004 03:18:54.837648   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:54.838068   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:54.838093   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:54.838048   31016 retry.go:31] will retry after 198.927613ms: waiting for machine to come up
	I1004 03:18:55.038453   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:55.038905   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:55.039050   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:55.039003   31016 retry.go:31] will retry after 306.415928ms: waiting for machine to come up
	I1004 03:18:55.347491   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:55.347913   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:55.347941   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:55.347876   31016 retry.go:31] will retry after 320.808758ms: waiting for machine to come up
	I1004 03:18:55.670381   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:55.670806   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:55.670832   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:55.670773   31016 retry.go:31] will retry after 393.714723ms: waiting for machine to come up
	I1004 03:18:56.066334   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:56.066789   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:56.066816   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:56.066737   31016 retry.go:31] will retry after 703.186123ms: waiting for machine to come up
	I1004 03:18:56.771284   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:56.771771   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:56.771816   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:56.771717   31016 retry.go:31] will retry after 687.11987ms: waiting for machine to come up
	I1004 03:18:57.460710   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:57.461089   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:57.461132   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:57.461080   31016 retry.go:31] will retry after 992.439827ms: waiting for machine to come up
	I1004 03:18:58.455669   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:58.456094   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:58.456109   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:58.456062   31016 retry.go:31] will retry after 1.176479657s: waiting for machine to come up
	I1004 03:18:59.634390   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:59.634814   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:59.634839   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:59.634775   31016 retry.go:31] will retry after 1.214254179s: waiting for machine to come up
	I1004 03:19:00.850238   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:00.850699   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:00.850731   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:00.850669   31016 retry.go:31] will retry after 1.755607467s: waiting for machine to come up
	I1004 03:19:02.608547   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:02.608946   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:02.608966   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:02.608910   31016 retry.go:31] will retry after 1.912286614s: waiting for machine to come up
	I1004 03:19:04.522463   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:04.522888   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:04.522917   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:04.522826   31016 retry.go:31] will retry after 2.242710645s: waiting for machine to come up
	I1004 03:19:06.766980   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:06.767510   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:06.767541   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:06.767449   31016 retry.go:31] will retry after 3.842874805s: waiting for machine to come up
	I1004 03:19:10.612857   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:10.613334   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:10.613359   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:10.613293   31016 retry.go:31] will retry after 4.05361864s: waiting for machine to come up
	I1004 03:19:14.669514   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.670029   30630 main.go:141] libmachine: (ha-994751-m02) Found IP for machine: 192.168.39.117
	I1004 03:19:14.670051   30630 main.go:141] libmachine: (ha-994751-m02) Reserving static IP address...
	I1004 03:19:14.670068   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has current primary IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.670622   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find host DHCP lease matching {name: "ha-994751-m02", mac: "52:54:00:b0:e7:80", ip: "192.168.39.117"} in network mk-ha-994751
	I1004 03:19:14.745981   30630 main.go:141] libmachine: (ha-994751-m02) Reserved static IP address: 192.168.39.117
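The repeated "will retry after ..." lines above are a poll-with-backoff loop: the driver keeps asking libvirt for a DHCP lease, sleeping a little longer each time, until the machine reports an IP or a deadline expires. A minimal sketch of that pattern (hypothetical helper, not minikube's actual retry package):

    package main

    import (
    	"errors"
    	"time"
    )

    // waitForIP polls lookup with growing delays until it reports an address or
    // the deadline passes, mirroring the retry.go "will retry after ..." lines above.
    func waitForIP(lookup func() (string, bool), deadline time.Time) (string, error) {
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, ok := lookup(); ok {
    			return ip, nil
    		}
    		time.Sleep(delay)
    		if delay < 4*time.Second {
    			delay = time.Duration(float64(delay) * 1.5) // rough backoff, as seen in the log
    		}
    	}
    	return "", errors.New("timed out waiting for machine to come up")
    }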
	I1004 03:19:14.746008   30630 main.go:141] libmachine: (ha-994751-m02) Waiting for SSH to be available...
	I1004 03:19:14.746017   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Getting to WaitForSSH function...
	I1004 03:19:14.748804   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.749281   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:14.749310   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.749511   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Using SSH client type: external
	I1004 03:19:14.749551   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa (-rw-------)
	I1004 03:19:14.749581   30630 main.go:141] libmachine: (ha-994751-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.117 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 03:19:14.749606   30630 main.go:141] libmachine: (ha-994751-m02) DBG | About to run SSH command:
	I1004 03:19:14.749624   30630 main.go:141] libmachine: (ha-994751-m02) DBG | exit 0
	I1004 03:19:14.876139   30630 main.go:141] libmachine: (ha-994751-m02) DBG | SSH cmd err, output: <nil>: 
	I1004 03:19:14.876447   30630 main.go:141] libmachine: (ha-994751-m02) KVM machine creation complete!
	I1004 03:19:14.876809   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetConfigRaw
	I1004 03:19:14.877356   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:14.877589   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:14.877768   30630 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 03:19:14.877780   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetState
	I1004 03:19:14.879122   30630 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 03:19:14.879138   30630 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 03:19:14.879143   30630 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 03:19:14.879149   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:14.881593   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.881953   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:14.881980   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.882095   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:14.882322   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:14.882470   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:14.882643   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:14.882838   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:14.883073   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:14.883086   30630 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 03:19:14.983285   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
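Both SSH probes above, the external /usr/bin/ssh invocation and this "native" client, do the same thing: authenticate as docker@192.168.39.117 with the generated id_rsa and run a trivial command such as "exit 0" to confirm the guest is reachable. A minimal sketch of the native path, assuming golang.org/x/crypto/ssh (an assumption about the underlying library, shown for illustration):

    package main

    import (
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH connects with the machine's private key and runs one command,
    // e.g. "exit 0" as a liveness probe, like the log lines above.
    func runSSH(addr, user, keyPath, cmd string) ([]byte, error) {
    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		return nil, err
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		return nil, err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return nil, err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return nil, err
    	}
    	defer session.Close()
    	return session.CombinedOutput(cmd)
    }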
	I1004 03:19:14.983306   30630 main.go:141] libmachine: Detecting the provisioner...
	I1004 03:19:14.983312   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:14.986285   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.986741   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:14.986757   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.987055   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:14.987278   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:14.987439   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:14.987656   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:14.987873   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:14.988031   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:14.988042   30630 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 03:19:15.088950   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 03:19:15.089011   30630 main.go:141] libmachine: found compatible host: buildroot
	I1004 03:19:15.089017   30630 main.go:141] libmachine: Provisioning with buildroot...
	I1004 03:19:15.089024   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetMachineName
	I1004 03:19:15.089254   30630 buildroot.go:166] provisioning hostname "ha-994751-m02"
	I1004 03:19:15.089274   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetMachineName
	I1004 03:19:15.089431   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.092470   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.092890   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.092918   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.093111   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.093289   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.093421   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.093532   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.093663   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:15.093819   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:15.093835   30630 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-994751-m02 && echo "ha-994751-m02" | sudo tee /etc/hostname
	I1004 03:19:15.206985   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-994751-m02
	
	I1004 03:19:15.207013   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.210129   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.210417   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.210457   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.210609   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.210806   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.210951   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.211140   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.211322   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:15.211488   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:15.211503   30630 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-994751-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-994751-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-994751-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:19:15.321696   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:19:15.321728   30630 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 03:19:15.321748   30630 buildroot.go:174] setting up certificates
	I1004 03:19:15.321761   30630 provision.go:84] configureAuth start
	I1004 03:19:15.321773   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetMachineName
	I1004 03:19:15.322055   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetIP
	I1004 03:19:15.324655   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.325067   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.325090   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.325226   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.327479   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.327889   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.327929   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.328106   30630 provision.go:143] copyHostCerts
	I1004 03:19:15.328139   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:19:15.328171   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 03:19:15.328185   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:19:15.328272   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 03:19:15.328393   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:19:15.328420   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 03:19:15.328430   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:19:15.328468   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 03:19:15.328620   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:19:15.328652   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 03:19:15.328662   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:19:15.328718   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 03:19:15.328821   30630 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.ha-994751-m02 san=[127.0.0.1 192.168.39.117 ha-994751-m02 localhost minikube]
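The server certificate generated above is issued from the local minikube CA with SANs matching the san=[...] list in that log line, so the endpoint can be verified by IP address or by hostname. A rough sketch of that issuance using Go's crypto/x509 (the helper name and expiry choice are assumptions for illustration; the SAN values are taken from the log):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // issueServerCert signs a server certificate with the CA, using the SANs
    // and organization shown in the provision.go line above.
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-994751-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-994751-m02", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.117")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }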
	I1004 03:19:15.560527   30630 provision.go:177] copyRemoteCerts
	I1004 03:19:15.560590   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:19:15.560612   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.563747   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.564236   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.564307   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.564520   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.564706   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.564861   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.565036   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:19:15.646851   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:19:15.646919   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1004 03:19:15.672945   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:19:15.673021   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 03:19:15.699880   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:19:15.699960   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:19:15.725929   30630 provision.go:87] duration metric: took 404.139584ms to configureAuth
	I1004 03:19:15.725975   30630 buildroot.go:189] setting minikube options for container-runtime
	I1004 03:19:15.726189   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:19:15.726282   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.729150   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.729586   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.729623   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.729761   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.729951   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.730107   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.730283   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.730477   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:15.730682   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:15.730704   30630 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:19:15.953783   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:19:15.953808   30630 main.go:141] libmachine: Checking connection to Docker...
	I1004 03:19:15.953817   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetURL
	I1004 03:19:15.955088   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Using libvirt version 6000000
	I1004 03:19:15.957213   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.957617   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.957642   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.957827   30630 main.go:141] libmachine: Docker is up and running!
	I1004 03:19:15.957841   30630 main.go:141] libmachine: Reticulating splines...
	I1004 03:19:15.957847   30630 client.go:171] duration metric: took 22.937783647s to LocalClient.Create
	I1004 03:19:15.957867   30630 start.go:167] duration metric: took 22.937832099s to libmachine.API.Create "ha-994751"
	I1004 03:19:15.957875   30630 start.go:293] postStartSetup for "ha-994751-m02" (driver="kvm2")
	I1004 03:19:15.957884   30630 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:19:15.957899   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:15.958102   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:19:15.958124   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.960392   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.960717   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.960745   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.960883   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.961062   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.961225   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.961368   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:19:16.042404   30630 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:19:16.047363   30630 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 03:19:16.047388   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 03:19:16.047468   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 03:19:16.047535   30630 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 03:19:16.047546   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /etc/ssl/certs/168792.pem
	I1004 03:19:16.047622   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:19:16.057062   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:19:16.082885   30630 start.go:296] duration metric: took 124.998047ms for postStartSetup
	I1004 03:19:16.082935   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetConfigRaw
	I1004 03:19:16.083581   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetIP
	I1004 03:19:16.086204   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.086582   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.086605   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.086841   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:19:16.087032   30630 start.go:128] duration metric: took 23.085132614s to createHost
	I1004 03:19:16.087053   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:16.089417   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.089782   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.089807   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.089984   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:16.090129   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:16.090241   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:16.090315   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:16.090436   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:16.090606   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:16.090615   30630 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 03:19:16.192923   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011956.165669680
	
	I1004 03:19:16.192949   30630 fix.go:216] guest clock: 1728011956.165669680
	I1004 03:19:16.192957   30630 fix.go:229] Guest: 2024-10-04 03:19:16.16566968 +0000 UTC Remote: 2024-10-04 03:19:16.08704226 +0000 UTC m=+70.399873263 (delta=78.62742ms)
	I1004 03:19:16.192972   30630 fix.go:200] guest clock delta is within tolerance: 78.62742ms
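The clock check above runs "date +%s.%N" on the guest, compares the result with the host clock, and only forces a resync when the difference exceeds a tolerance; here the 78.6ms delta is accepted. A small sketch of that comparison (the tolerance value is an assumption for illustration; the timestamps are the ones from the log):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and reports how far
    // it drifts from the host clock, as in the fix.go lines above.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	host := time.Date(2024, 10, 4, 3, 19, 16, 87042260, time.UTC) // "Remote" time from the log
    	d, err := clockDelta("1728011956.165669680", host)            // guest clock from the log
    	if err != nil {
    		panic(err)
    	}
    	const tolerance = time.Second // assumed threshold, not minikube's actual value
    	fmt.Printf("delta=%v within tolerance: %v\n", d, math.Abs(float64(d)) < float64(tolerance))
    }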
	I1004 03:19:16.192978   30630 start.go:83] releasing machines lock for "ha-994751-m02", held for 23.191201934s
	I1004 03:19:16.193000   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:16.193291   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetIP
	I1004 03:19:16.196268   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.196769   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.196799   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.199156   30630 out.go:177] * Found network options:
	I1004 03:19:16.200650   30630 out.go:177]   - NO_PROXY=192.168.39.65
	W1004 03:19:16.201984   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:19:16.202013   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:16.202608   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:16.202783   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:16.202904   30630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:19:16.202945   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	W1004 03:19:16.203033   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:19:16.203114   30630 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:19:16.203136   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:16.205729   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.205978   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.206109   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.206134   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.206286   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:16.206384   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.206425   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.206455   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:16.206610   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:16.206681   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:16.206748   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:19:16.206786   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:16.206947   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:16.207052   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:19:16.451088   30630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 03:19:16.457611   30630 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 03:19:16.457679   30630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:19:16.474500   30630 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 03:19:16.474524   30630 start.go:495] detecting cgroup driver to use...
	I1004 03:19:16.474577   30630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:19:16.491337   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:19:16.505852   30630 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:19:16.505915   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:19:16.519394   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:19:16.533389   30630 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:19:16.647440   30630 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:19:16.796026   30630 docker.go:233] disabling docker service ...
	I1004 03:19:16.796090   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:19:16.810390   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:19:16.824447   30630 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:19:16.967078   30630 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:19:17.099949   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:19:17.114752   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:19:17.134460   30630 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:19:17.134514   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.144920   30630 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:19:17.144984   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.155252   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.165315   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.175583   30630 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:19:17.186303   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.198678   30630 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.217975   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.229419   30630 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:19:17.241337   30630 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 03:19:17.241386   30630 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 03:19:17.254390   30630 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:19:17.264806   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:19:17.402028   30630 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 03:19:17.495758   30630 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:19:17.495841   30630 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:19:17.500623   30630 start.go:563] Will wait 60s for crictl version
	I1004 03:19:17.500678   30630 ssh_runner.go:195] Run: which crictl
	I1004 03:19:17.504705   30630 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:19:17.550368   30630 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 03:19:17.550468   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:19:17.578910   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:19:17.612824   30630 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 03:19:17.614302   30630 out.go:177]   - env NO_PROXY=192.168.39.65
	I1004 03:19:17.615583   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetIP
	I1004 03:19:17.618499   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:17.619022   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:17.619049   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:17.619276   30630 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 03:19:17.623687   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:19:17.636797   30630 mustload.go:65] Loading cluster: ha-994751
	I1004 03:19:17.637003   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:19:17.637273   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:19:17.637322   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:19:17.651836   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38711
	I1004 03:19:17.652278   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:19:17.652784   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:19:17.652801   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:19:17.653111   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:19:17.653311   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:19:17.654878   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:19:17.655231   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:19:17.655273   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:19:17.669844   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I1004 03:19:17.670238   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:19:17.670702   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:19:17.670716   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:19:17.671055   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:19:17.671261   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:19:17.671448   30630 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751 for IP: 192.168.39.117
	I1004 03:19:17.671472   30630 certs.go:194] generating shared ca certs ...
	I1004 03:19:17.671486   30630 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:19:17.671619   30630 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 03:19:17.671665   30630 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 03:19:17.671678   30630 certs.go:256] generating profile certs ...
	I1004 03:19:17.671769   30630 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key
	I1004 03:19:17.671816   30630 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.7edcc3fb
	I1004 03:19:17.671836   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.7edcc3fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65 192.168.39.117 192.168.39.254]
	I1004 03:19:17.982961   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.7edcc3fb ...
	I1004 03:19:17.982990   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.7edcc3fb: {Name:mka857c573044186dc7f952f5b2ab8a540e4e52f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:19:17.983170   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.7edcc3fb ...
	I1004 03:19:17.983188   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.7edcc3fb: {Name:mka872bfad80f36ccf6cfb0285b019b3212263dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:19:17.983268   30630 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.7edcc3fb -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt
	I1004 03:19:17.983413   30630 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.7edcc3fb -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key
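The apiserver profile cert generated above is issued for every address a client might dial: the in-cluster service IP 10.96.0.1, loopback, and the node and VIP addresses 192.168.39.65, 192.168.39.117 and 192.168.39.254. A minimal sketch of producing a certificate with those IP SANs using Go's crypto/x509 (self-signed here for brevity; the real cert is signed by minikubeCA):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs listed in the crypto.go line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.65"), net.ParseIP("192.168.39.117"), net.ParseIP("192.168.39.254"),
		},
	}
	// Self-signed for brevity; minikube signs this with its cluster CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}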
	I1004 03:19:17.983593   30630 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key
	I1004 03:19:17.983610   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:19:17.983628   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:19:17.983649   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:19:17.983666   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:19:17.983685   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:19:17.983700   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:19:17.983717   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:19:17.983736   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:19:17.983821   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 03:19:17.983865   30630 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 03:19:17.983877   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:19:17.983909   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:19:17.983943   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:19:17.984054   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 03:19:17.984129   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:19:17.984175   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /usr/share/ca-certificates/168792.pem
	I1004 03:19:17.984197   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:19:17.984216   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem -> /usr/share/ca-certificates/16879.pem
	I1004 03:19:17.984276   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:19:17.987517   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:19:17.987891   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:19:17.987919   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:19:17.988138   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:19:17.988361   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:19:17.988505   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:19:17.988670   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:19:18.060182   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1004 03:19:18.065324   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1004 03:19:18.078017   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1004 03:19:18.082669   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1004 03:19:18.094668   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1004 03:19:18.099036   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1004 03:19:18.110596   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1004 03:19:18.115397   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1004 03:19:18.126291   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1004 03:19:18.131864   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1004 03:19:18.143496   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1004 03:19:18.147678   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1004 03:19:18.158714   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:19:18.185425   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 03:19:18.212989   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:19:18.238721   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 03:19:18.265688   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1004 03:19:18.292564   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 03:19:18.318046   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:19:18.343621   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:19:18.367533   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 03:19:18.391460   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:19:18.414533   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 03:19:18.437881   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1004 03:19:18.454162   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1004 03:19:18.470435   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1004 03:19:18.487697   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1004 03:19:18.504422   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1004 03:19:18.521609   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1004 03:19:18.538712   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1004 03:19:18.555759   30630 ssh_runner.go:195] Run: openssl version
	I1004 03:19:18.561485   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 03:19:18.572838   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 03:19:18.578085   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:19:18.578150   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 03:19:18.584699   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:19:18.596515   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:19:18.608107   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:19:18.613090   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:19:18.613151   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:19:18.619060   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:19:18.630222   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 03:19:18.642211   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 03:19:18.646675   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:19:18.646733   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 03:19:18.652690   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
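The openssl/ln commands above link each CA bundle under /usr/share/ca-certificates into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA here), which is how OpenSSL-based clients locate trust anchors. A sketch of the same two steps, assuming openssl is on PATH and sufficient privileges to write /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl x509 -hash -noout -in <pem> prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of: test -L <link> || ln -fs <pem> <link>
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
	}
	fmt.Println(link, "->", pemPath)
}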
	I1004 03:19:18.663892   30630 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:19:18.668101   30630 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 03:19:18.668177   30630 kubeadm.go:934] updating node {m02 192.168.39.117 8443 v1.31.1 crio true true} ...
	I1004 03:19:18.668262   30630 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-994751-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
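The kubelet unit rendered above differs per node only in --hostname-override and --node-ip. An illustrative text/template sketch of rendering that drop-in (not minikube's actual template; the field names here are made up for the example):

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the node being joined in the log above.
	t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.1",
		"NodeName":          "ha-994751-m02",
		"NodeIP":            "192.168.39.117",
	})
}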
	I1004 03:19:18.668287   30630 kube-vip.go:115] generating kube-vip config ...
	I1004 03:19:18.668368   30630 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1004 03:19:18.686599   30630 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1004 03:19:18.686662   30630 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
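The manifest above runs kube-vip as a static pod on every control-plane node; the elected leader claims the VIP 192.168.39.254 via ARP and load-balances port 8443 to the apiservers. A quick reachability check for that VIP is therefore just a TCP dial; a minimal sketch:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The control-plane VIP and port from the generated kube-vip config above.
	addr := "192.168.39.254:8443"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP reachable:", addr)
}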
	I1004 03:19:18.686715   30630 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:19:18.697844   30630 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1004 03:19:18.697908   30630 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1004 03:19:18.708942   30630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1004 03:19:18.708972   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1004 03:19:18.708991   30630 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1004 03:19:18.709028   30630 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1004 03:19:18.709031   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1004 03:19:18.713612   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1004 03:19:18.713636   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1004 03:19:19.809158   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:19:19.826203   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1004 03:19:19.826314   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1004 03:19:19.830837   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1004 03:19:19.830871   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1004 03:19:19.978327   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1004 03:19:19.978413   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1004 03:19:19.988543   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1004 03:19:19.988589   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
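The "checksum=file:...sha256" URLs above indicate that each downloaded binary is verified against the published SHA-256 digest on dl.k8s.io before it is copied to /var/lib/minikube/binaries. A self-contained sketch of that download-and-verify step for kubectl (illustrative, not minikube's download.go):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("verified", base)
}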
	I1004 03:19:20.364768   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1004 03:19:20.374518   30630 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1004 03:19:20.391501   30630 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:19:20.408160   30630 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1004 03:19:20.424511   30630 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:19:20.428280   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:19:20.439917   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:19:20.559800   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:19:20.576330   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:19:20.576654   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:19:20.576692   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:19:20.592425   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39375
	I1004 03:19:20.593014   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:19:20.593564   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:19:20.593590   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:19:20.593896   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:19:20.594067   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:19:20.594173   30630 start.go:317] joinCluster: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:19:20.594288   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1004 03:19:20.594307   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:19:20.597288   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:19:20.597706   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:19:20.597738   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:19:20.597851   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:19:20.598146   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:19:20.598359   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:19:20.598601   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:19:20.751261   30630 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:19:20.751313   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tfpvu2.gfmxns87jp8m6lea --discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-994751-m02 --control-plane --apiserver-advertise-address=192.168.39.117 --apiserver-bind-port=8443"
	I1004 03:19:42.477327   30630 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tfpvu2.gfmxns87jp8m6lea --discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-994751-m02 --control-plane --apiserver-advertise-address=192.168.39.117 --apiserver-bind-port=8443": (21.725989536s)
	I1004 03:19:42.477374   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1004 03:19:43.011388   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-994751-m02 minikube.k8s.io/updated_at=2024_10_04T03_19_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-994751 minikube.k8s.io/primary=false
	I1004 03:19:43.128289   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-994751-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1004 03:19:43.240778   30630 start.go:319] duration metric: took 22.646600164s to joinCluster
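The kubeadm join command above authenticates the cluster with --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's public key (its DER-encoded SubjectPublicKeyInfo). A sketch that recomputes the hash from ca.crt so the value in the log can be cross-checked (the file path is the one used on the control-plane node above):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is SHA-256 over the DER-encoded
	// SubjectPublicKeyInfo of the cluster CA certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}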
	I1004 03:19:43.240848   30630 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:19:43.241147   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:19:43.242449   30630 out.go:177] * Verifying Kubernetes components...
	I1004 03:19:43.243651   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:19:43.505854   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:19:43.526989   30630 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:19:43.527348   30630 kapi.go:59] client config for ha-994751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key", CAFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1004 03:19:43.527435   30630 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.65:8443
	I1004 03:19:43.527706   30630 node_ready.go:35] waiting up to 6m0s for node "ha-994751-m02" to be "Ready" ...
	I1004 03:19:43.527836   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:43.527848   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:43.527859   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:43.527864   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:43.538086   30630 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1004 03:19:44.028570   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:44.028592   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:44.028599   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:44.028604   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:44.034683   30630 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:19:44.528680   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:44.528707   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:44.528719   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:44.528727   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:44.532210   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:45.028095   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:45.028116   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:45.028124   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:45.028128   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:45.031650   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:45.528659   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:45.528681   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:45.528689   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:45.528693   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:45.532032   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:45.532726   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:46.028184   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:46.028208   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:46.028220   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:46.028224   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:46.031876   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:46.528850   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:46.528870   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:46.528878   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:46.528883   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:46.532535   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:47.028593   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:47.028614   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:47.028622   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:47.028625   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:47.032488   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:47.528380   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:47.528406   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:47.528417   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:47.528423   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:47.532834   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:19:47.533292   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:48.028846   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:48.028866   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:48.028876   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:48.028879   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:48.033387   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:19:48.527941   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:48.527965   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:48.527976   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:48.527982   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:48.531255   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:49.027941   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:49.027974   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:49.027982   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:49.027985   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:49.032078   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:19:49.527942   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:49.527977   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:49.527988   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:49.527993   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:49.531336   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:50.027938   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:50.027962   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:50.027970   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:50.027975   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:50.031574   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:50.032261   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:50.528731   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:50.528756   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:50.528762   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:50.528766   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:50.533072   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:19:51.028280   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:51.028305   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:51.028315   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:51.028318   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:51.031958   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:51.527942   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:51.527963   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:51.527971   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:51.527975   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:51.531671   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:52.028715   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:52.028739   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:52.028747   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:52.028752   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:52.032273   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:52.032782   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:52.528521   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:52.528543   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:52.528553   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:52.528556   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:52.532328   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:53.028497   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:53.028519   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:53.028533   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:53.028536   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:53.031845   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:53.527963   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:53.527986   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:53.527995   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:53.527999   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:53.531468   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:54.028502   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:54.028524   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:54.028533   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:54.028537   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:54.032380   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:54.032974   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:54.528253   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:54.528276   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:54.528286   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:54.528293   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:54.531649   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:55.028786   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:55.028804   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:55.028812   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:55.028817   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:55.032371   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:55.527931   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:55.527953   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:55.527961   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:55.527965   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:55.531477   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:56.028492   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:56.028512   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:56.028519   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:56.028524   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:56.031319   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:19:56.527963   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:56.527981   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:56.527990   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:56.527993   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:56.531347   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:56.531854   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:57.027943   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:57.027962   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:57.027970   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:57.027979   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:57.031176   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:57.527972   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:57.527995   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:57.528006   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:57.528011   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:57.531355   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:58.028084   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:58.028103   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:58.028111   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:58.028115   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:58.034080   30630 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 03:19:58.527939   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:58.527959   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:58.527967   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:58.527972   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:58.530892   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:19:59.027908   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:59.027929   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:59.027938   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:59.027943   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:59.031093   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:59.031750   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:59.528117   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:59.528140   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:59.528148   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:59.528152   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:59.531338   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.027934   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:00.027956   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.027964   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.027968   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.031243   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.527969   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:00.527990   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.527998   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.528002   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.535322   30630 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:20:00.536101   30630 node_ready.go:49] node "ha-994751-m02" has status "Ready":"True"
	I1004 03:20:00.536141   30630 node_ready.go:38] duration metric: took 17.008396711s for node "ha-994751-m02" to be "Ready" ...
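The node_ready wait above is a simple poll: GET the Node object every ~500ms until its Ready condition reports True (about 17s here). An illustrative client-go equivalent, assuming the kubeconfig path shown earlier in the log (not minikube's internal node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19546-9647/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-994751-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for Ready")
}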
	I1004 03:20:00.536154   30630 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:20:00.536255   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:00.536269   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.536281   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.536287   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.550231   30630 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1004 03:20:00.558943   30630 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.559041   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-l6zst
	I1004 03:20:00.559052   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.559063   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.559071   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.562462   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.563534   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.563551   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.563558   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.563562   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.566458   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.567373   30630 pod_ready.go:93] pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.567390   30630 pod_ready.go:82] duration metric: took 8.418573ms for pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.567399   30630 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.567443   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zgdck
	I1004 03:20:00.567450   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.567457   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.567461   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.571010   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.572015   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.572028   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.572035   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.572040   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.574144   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.574637   30630 pod_ready.go:93] pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.574653   30630 pod_ready.go:82] duration metric: took 7.248385ms for pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.574660   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.574701   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751
	I1004 03:20:00.574708   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.574714   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.574718   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.577426   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.578237   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.578256   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.578262   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.578268   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.581297   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.582104   30630 pod_ready.go:93] pod "etcd-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.582124   30630 pod_ready.go:82] duration metric: took 7.457921ms for pod "etcd-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.582136   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.582194   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751-m02
	I1004 03:20:00.582206   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.582213   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.582218   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.584954   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.586074   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:00.586089   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.586096   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.586098   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.588315   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.588797   30630 pod_ready.go:93] pod "etcd-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.588819   30630 pod_ready.go:82] duration metric: took 6.675728ms for pod "etcd-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.588836   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.728447   30630 request.go:632] Waited for 139.544334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751
	I1004 03:20:00.728509   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751
	I1004 03:20:00.728514   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.728522   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.728527   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.732242   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.928492   30630 request.go:632] Waited for 195.478493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.928550   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.928556   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.928563   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.928567   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.932014   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.932660   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.932680   30630 pod_ready.go:82] duration metric: took 343.837498ms for pod "kube-apiserver-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.932690   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.128708   30630 request.go:632] Waited for 195.949159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m02
	I1004 03:20:01.128769   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m02
	I1004 03:20:01.128778   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.128786   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.128790   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.131924   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:01.328936   30630 request.go:632] Waited for 196.247417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:01.328982   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:01.328986   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.328993   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.328999   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.332116   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:01.332718   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:01.332735   30630 pod_ready.go:82] duration metric: took 400.039408ms for pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.332744   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.528985   30630 request.go:632] Waited for 196.178172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751
	I1004 03:20:01.529051   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751
	I1004 03:20:01.529057   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.529064   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.529068   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.532813   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:01.728751   30630 request.go:632] Waited for 195.374296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:01.728822   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:01.728828   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.728835   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.728838   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.732685   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:01.733267   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:01.733284   30630 pod_ready.go:82] duration metric: took 400.533757ms for pod "kube-controller-manager-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.733292   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.928444   30630 request.go:632] Waited for 195.093384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m02
	I1004 03:20:01.928511   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m02
	I1004 03:20:01.928517   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.928523   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.928531   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.931659   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.128724   30630 request.go:632] Waited for 196.347214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:02.128778   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:02.128783   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.128789   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.128794   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.132222   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.132803   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:02.132822   30630 pod_ready.go:82] duration metric: took 399.524177ms for pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.132832   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f44b9" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.328210   30630 request.go:632] Waited for 195.309099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f44b9
	I1004 03:20:02.328274   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f44b9
	I1004 03:20:02.328281   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.328288   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.328293   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.331313   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.528409   30630 request.go:632] Waited for 196.390078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:02.528468   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:02.528474   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.528481   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.528486   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.531912   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.532422   30630 pod_ready.go:93] pod "kube-proxy-f44b9" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:02.532446   30630 pod_ready.go:82] duration metric: took 399.600972ms for pod "kube-proxy-f44b9" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.532455   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ph6cf" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.728449   30630 request.go:632] Waited for 195.932314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ph6cf
	I1004 03:20:02.728525   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ph6cf
	I1004 03:20:02.728531   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.728539   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.728547   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.732138   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.928159   30630 request.go:632] Waited for 195.316789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:02.928222   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:02.928227   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.928234   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.928238   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.931607   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.932124   30630 pod_ready.go:93] pod "kube-proxy-ph6cf" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:02.932148   30630 pod_ready.go:82] duration metric: took 399.687611ms for pod "kube-proxy-ph6cf" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.932157   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:03.128514   30630 request.go:632] Waited for 196.295312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751
	I1004 03:20:03.128566   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751
	I1004 03:20:03.128571   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.128579   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.128585   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.131954   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:03.328958   30630 request.go:632] Waited for 196.406685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:03.329017   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:03.329023   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.329031   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.329039   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.332357   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:03.332971   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:03.332988   30630 pod_ready.go:82] duration metric: took 400.824355ms for pod "kube-scheduler-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:03.332997   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:03.528105   30630 request.go:632] Waited for 195.029512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m02
	I1004 03:20:03.528157   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m02
	I1004 03:20:03.528162   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.528169   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.528173   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.531733   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:03.727947   30630 request.go:632] Waited for 195.304105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:03.728022   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:03.728029   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.728038   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.728046   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.731222   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:03.731799   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:03.731823   30630 pod_ready.go:82] duration metric: took 398.818433ms for pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:03.731836   30630 pod_ready.go:39] duration metric: took 3.195663558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:20:03.731854   30630 api_server.go:52] waiting for apiserver process to appear ...
	I1004 03:20:03.731914   30630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:20:03.748156   30630 api_server.go:72] duration metric: took 20.507274316s to wait for apiserver process to appear ...
	I1004 03:20:03.748186   30630 api_server.go:88] waiting for apiserver healthz status ...
	I1004 03:20:03.748208   30630 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I1004 03:20:03.752562   30630 api_server.go:279] https://192.168.39.65:8443/healthz returned 200:
	ok
	I1004 03:20:03.752615   30630 round_trippers.go:463] GET https://192.168.39.65:8443/version
	I1004 03:20:03.752620   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.752627   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.752633   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.753368   30630 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1004 03:20:03.753569   30630 api_server.go:141] control plane version: v1.31.1
	I1004 03:20:03.753592   30630 api_server.go:131] duration metric: took 5.397003ms to wait for apiserver health ...
	I1004 03:20:03.753601   30630 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 03:20:03.928947   30630 request.go:632] Waited for 175.282043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:03.929032   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:03.929040   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.929049   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.929055   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.934063   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:20:03.938318   30630 system_pods.go:59] 17 kube-system pods found
	I1004 03:20:03.938350   30630 system_pods.go:61] "coredns-7c65d6cfc9-l6zst" [554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab] Running
	I1004 03:20:03.938358   30630 system_pods.go:61] "coredns-7c65d6cfc9-zgdck" [dcd6ed49-8491-4eb0-9863-b498c76ec3c5] Running
	I1004 03:20:03.938363   30630 system_pods.go:61] "etcd-ha-994751" [ad26bfe8-b3b3-44fb-8f83-fd4f62a92ea6] Running
	I1004 03:20:03.938369   30630 system_pods.go:61] "etcd-ha-994751-m02" [540bc27b-d1ee-48e7-99a3-5daf036ec06f] Running
	I1004 03:20:03.938373   30630 system_pods.go:61] "kindnet-2mhh2" [442d5ad9-dc9c-4a07-90b3-549591f9d2f1] Running
	I1004 03:20:03.938378   30630 system_pods.go:61] "kindnet-rmcvt" [08ef4494-9229-4cd3-b22e-10709b88e14c] Running
	I1004 03:20:03.938383   30630 system_pods.go:61] "kube-apiserver-ha-994751" [68f6b078-f7ae-4c2d-b424-372647c7d203] Running
	I1004 03:20:03.938387   30630 system_pods.go:61] "kube-apiserver-ha-994751-m02" [f4773005-30fa-46bc-b372-bbd0c7bfc1f1] Running
	I1004 03:20:03.938392   30630 system_pods.go:61] "kube-controller-manager-ha-994751" [7a09373f-d6d5-4fa2-bc68-0f2ba151761f] Running
	I1004 03:20:03.938397   30630 system_pods.go:61] "kube-controller-manager-ha-994751-m02" [31278e89-fb33-4713-b0e4-3b589944caa9] Running
	I1004 03:20:03.938402   30630 system_pods.go:61] "kube-proxy-f44b9" [e3e1a917-0150-4608-b5f3-b590d330d2ce] Running
	I1004 03:20:03.938408   30630 system_pods.go:61] "kube-proxy-ph6cf" [0eef5964-876b-4cee-ad4c-c93ab034d3f9] Running
	I1004 03:20:03.938416   30630 system_pods.go:61] "kube-scheduler-ha-994751" [527e5bff-7234-4ab0-9952-fe9ec87ab01a] Running
	I1004 03:20:03.938422   30630 system_pods.go:61] "kube-scheduler-ha-994751-m02" [9c89432d-e607-4e86-8d68-12ee0c2f3170] Running
	I1004 03:20:03.938430   30630 system_pods.go:61] "kube-vip-ha-994751" [7955e17e-6c22-49b3-aa6a-9d37b9bc7942] Running
	I1004 03:20:03.938435   30630 system_pods.go:61] "kube-vip-ha-994751-m02" [944f1844-b8c2-410e-981d-3705beb22638] Running
	I1004 03:20:03.938440   30630 system_pods.go:61] "storage-provisioner" [cc60903f-91b9-4e59-92ab-9f16c09d38d2] Running
	I1004 03:20:03.938450   30630 system_pods.go:74] duration metric: took 184.842668ms to wait for pod list to return data ...
	I1004 03:20:03.938469   30630 default_sa.go:34] waiting for default service account to be created ...
	I1004 03:20:04.128894   30630 request.go:632] Waited for 190.327691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:20:04.128944   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:20:04.128949   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:04.128956   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:04.128960   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:04.132905   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:04.133105   30630 default_sa.go:45] found service account: "default"
	I1004 03:20:04.133122   30630 default_sa.go:55] duration metric: took 194.645917ms for default service account to be created ...
	I1004 03:20:04.133132   30630 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 03:20:04.328598   30630 request.go:632] Waited for 195.393579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:04.328702   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:04.328730   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:04.328744   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:04.328753   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:04.333188   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:20:04.337805   30630 system_pods.go:86] 17 kube-system pods found
	I1004 03:20:04.337832   30630 system_pods.go:89] "coredns-7c65d6cfc9-l6zst" [554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab] Running
	I1004 03:20:04.337838   30630 system_pods.go:89] "coredns-7c65d6cfc9-zgdck" [dcd6ed49-8491-4eb0-9863-b498c76ec3c5] Running
	I1004 03:20:04.337842   30630 system_pods.go:89] "etcd-ha-994751" [ad26bfe8-b3b3-44fb-8f83-fd4f62a92ea6] Running
	I1004 03:20:04.337848   30630 system_pods.go:89] "etcd-ha-994751-m02" [540bc27b-d1ee-48e7-99a3-5daf036ec06f] Running
	I1004 03:20:04.337851   30630 system_pods.go:89] "kindnet-2mhh2" [442d5ad9-dc9c-4a07-90b3-549591f9d2f1] Running
	I1004 03:20:04.337855   30630 system_pods.go:89] "kindnet-rmcvt" [08ef4494-9229-4cd3-b22e-10709b88e14c] Running
	I1004 03:20:04.337859   30630 system_pods.go:89] "kube-apiserver-ha-994751" [68f6b078-f7ae-4c2d-b424-372647c7d203] Running
	I1004 03:20:04.337863   30630 system_pods.go:89] "kube-apiserver-ha-994751-m02" [f4773005-30fa-46bc-b372-bbd0c7bfc1f1] Running
	I1004 03:20:04.337867   30630 system_pods.go:89] "kube-controller-manager-ha-994751" [7a09373f-d6d5-4fa2-bc68-0f2ba151761f] Running
	I1004 03:20:04.337874   30630 system_pods.go:89] "kube-controller-manager-ha-994751-m02" [31278e89-fb33-4713-b0e4-3b589944caa9] Running
	I1004 03:20:04.337878   30630 system_pods.go:89] "kube-proxy-f44b9" [e3e1a917-0150-4608-b5f3-b590d330d2ce] Running
	I1004 03:20:04.337885   30630 system_pods.go:89] "kube-proxy-ph6cf" [0eef5964-876b-4cee-ad4c-c93ab034d3f9] Running
	I1004 03:20:04.337889   30630 system_pods.go:89] "kube-scheduler-ha-994751" [527e5bff-7234-4ab0-9952-fe9ec87ab01a] Running
	I1004 03:20:04.337901   30630 system_pods.go:89] "kube-scheduler-ha-994751-m02" [9c89432d-e607-4e86-8d68-12ee0c2f3170] Running
	I1004 03:20:04.337904   30630 system_pods.go:89] "kube-vip-ha-994751" [7955e17e-6c22-49b3-aa6a-9d37b9bc7942] Running
	I1004 03:20:04.337907   30630 system_pods.go:89] "kube-vip-ha-994751-m02" [944f1844-b8c2-410e-981d-3705beb22638] Running
	I1004 03:20:04.337912   30630 system_pods.go:89] "storage-provisioner" [cc60903f-91b9-4e59-92ab-9f16c09d38d2] Running
	I1004 03:20:04.337921   30630 system_pods.go:126] duration metric: took 204.78361ms to wait for k8s-apps to be running ...
	I1004 03:20:04.337930   30630 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 03:20:04.337975   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:20:04.352705   30630 system_svc.go:56] duration metric: took 14.766178ms WaitForService to wait for kubelet
	I1004 03:20:04.352728   30630 kubeadm.go:582] duration metric: took 21.111850874s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:20:04.352744   30630 node_conditions.go:102] verifying NodePressure condition ...
	I1004 03:20:04.528049   30630 request.go:632] Waited for 175.240806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes
	I1004 03:20:04.528140   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes
	I1004 03:20:04.528148   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:04.528158   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:04.528166   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:04.532040   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:04.532645   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:20:04.532668   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:20:04.532682   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:20:04.532689   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:20:04.532696   30630 node_conditions.go:105] duration metric: took 179.947049ms to run NodePressure ...
	I1004 03:20:04.532711   30630 start.go:241] waiting for startup goroutines ...
	I1004 03:20:04.532748   30630 start.go:255] writing updated cluster config ...
	I1004 03:20:04.534798   30630 out.go:201] 
	I1004 03:20:04.536250   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:20:04.536346   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:20:04.537713   30630 out.go:177] * Starting "ha-994751-m03" control-plane node in "ha-994751" cluster
	I1004 03:20:04.538772   30630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:20:04.538791   30630 cache.go:56] Caching tarball of preloaded images
	I1004 03:20:04.538881   30630 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 03:20:04.538892   30630 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:20:04.538970   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:20:04.539124   30630 start.go:360] acquireMachinesLock for ha-994751-m03: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 03:20:04.539179   30630 start.go:364] duration metric: took 32.772µs to acquireMachinesLock for "ha-994751-m03"
	I1004 03:20:04.539202   30630 start.go:93] Provisioning new machine with config: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:20:04.539327   30630 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1004 03:20:04.540776   30630 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 03:20:04.540857   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:20:04.540889   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:20:04.555427   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46337
	I1004 03:20:04.555831   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:20:04.556364   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:20:04.556394   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:20:04.556738   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:20:04.556921   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetMachineName
	I1004 03:20:04.557038   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:04.557175   30630 start.go:159] libmachine.API.Create for "ha-994751" (driver="kvm2")
	I1004 03:20:04.557204   30630 client.go:168] LocalClient.Create starting
	I1004 03:20:04.557233   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 03:20:04.557271   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:20:04.557291   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:20:04.557375   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 03:20:04.557421   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:20:04.557449   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:20:04.557481   30630 main.go:141] libmachine: Running pre-create checks...
	I1004 03:20:04.557495   30630 main.go:141] libmachine: (ha-994751-m03) Calling .PreCreateCheck
	I1004 03:20:04.557705   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetConfigRaw
	I1004 03:20:04.558081   30630 main.go:141] libmachine: Creating machine...
	I1004 03:20:04.558096   30630 main.go:141] libmachine: (ha-994751-m03) Calling .Create
	I1004 03:20:04.558257   30630 main.go:141] libmachine: (ha-994751-m03) Creating KVM machine...
	I1004 03:20:04.559668   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found existing default KVM network
	I1004 03:20:04.559869   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found existing private KVM network mk-ha-994751
	I1004 03:20:04.560039   30630 main.go:141] libmachine: (ha-994751-m03) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03 ...
	I1004 03:20:04.560065   30630 main.go:141] libmachine: (ha-994751-m03) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 03:20:04.560110   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:04.560016   31400 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:20:04.560192   30630 main.go:141] libmachine: (ha-994751-m03) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 03:20:04.808276   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:04.808145   31400 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa...
	I1004 03:20:05.005812   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:05.005703   31400 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/ha-994751-m03.rawdisk...
	I1004 03:20:05.005838   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Writing magic tar header
	I1004 03:20:05.005848   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Writing SSH key tar header
	I1004 03:20:05.005856   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:05.005807   31400 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03 ...
	I1004 03:20:05.005932   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03
	I1004 03:20:05.005971   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 03:20:05.006001   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03 (perms=drwx------)
	I1004 03:20:05.006011   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:20:05.006021   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 03:20:05.006034   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 03:20:05.006047   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 03:20:05.006063   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 03:20:05.006075   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 03:20:05.006086   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 03:20:05.006100   30630 main.go:141] libmachine: (ha-994751-m03) Creating domain...
	I1004 03:20:05.006109   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 03:20:05.006122   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins
	I1004 03:20:05.006135   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home
	I1004 03:20:05.006147   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Skipping /home - not owner
	I1004 03:20:05.007092   30630 main.go:141] libmachine: (ha-994751-m03) define libvirt domain using xml: 
	I1004 03:20:05.007116   30630 main.go:141] libmachine: (ha-994751-m03) <domain type='kvm'>
	I1004 03:20:05.007126   30630 main.go:141] libmachine: (ha-994751-m03)   <name>ha-994751-m03</name>
	I1004 03:20:05.007139   30630 main.go:141] libmachine: (ha-994751-m03)   <memory unit='MiB'>2200</memory>
	I1004 03:20:05.007151   30630 main.go:141] libmachine: (ha-994751-m03)   <vcpu>2</vcpu>
	I1004 03:20:05.007158   30630 main.go:141] libmachine: (ha-994751-m03)   <features>
	I1004 03:20:05.007166   30630 main.go:141] libmachine: (ha-994751-m03)     <acpi/>
	I1004 03:20:05.007173   30630 main.go:141] libmachine: (ha-994751-m03)     <apic/>
	I1004 03:20:05.007177   30630 main.go:141] libmachine: (ha-994751-m03)     <pae/>
	I1004 03:20:05.007183   30630 main.go:141] libmachine: (ha-994751-m03)     
	I1004 03:20:05.007189   30630 main.go:141] libmachine: (ha-994751-m03)   </features>
	I1004 03:20:05.007198   30630 main.go:141] libmachine: (ha-994751-m03)   <cpu mode='host-passthrough'>
	I1004 03:20:05.007205   30630 main.go:141] libmachine: (ha-994751-m03)   
	I1004 03:20:05.007209   30630 main.go:141] libmachine: (ha-994751-m03)   </cpu>
	I1004 03:20:05.007231   30630 main.go:141] libmachine: (ha-994751-m03)   <os>
	I1004 03:20:05.007247   30630 main.go:141] libmachine: (ha-994751-m03)     <type>hvm</type>
	I1004 03:20:05.007256   30630 main.go:141] libmachine: (ha-994751-m03)     <boot dev='cdrom'/>
	I1004 03:20:05.007270   30630 main.go:141] libmachine: (ha-994751-m03)     <boot dev='hd'/>
	I1004 03:20:05.007282   30630 main.go:141] libmachine: (ha-994751-m03)     <bootmenu enable='no'/>
	I1004 03:20:05.007301   30630 main.go:141] libmachine: (ha-994751-m03)   </os>
	I1004 03:20:05.007312   30630 main.go:141] libmachine: (ha-994751-m03)   <devices>
	I1004 03:20:05.007323   30630 main.go:141] libmachine: (ha-994751-m03)     <disk type='file' device='cdrom'>
	I1004 03:20:05.007339   30630 main.go:141] libmachine: (ha-994751-m03)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/boot2docker.iso'/>
	I1004 03:20:05.007353   30630 main.go:141] libmachine: (ha-994751-m03)       <target dev='hdc' bus='scsi'/>
	I1004 03:20:05.007365   30630 main.go:141] libmachine: (ha-994751-m03)       <readonly/>
	I1004 03:20:05.007373   30630 main.go:141] libmachine: (ha-994751-m03)     </disk>
	I1004 03:20:05.007385   30630 main.go:141] libmachine: (ha-994751-m03)     <disk type='file' device='disk'>
	I1004 03:20:05.007397   30630 main.go:141] libmachine: (ha-994751-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 03:20:05.007412   30630 main.go:141] libmachine: (ha-994751-m03)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/ha-994751-m03.rawdisk'/>
	I1004 03:20:05.007427   30630 main.go:141] libmachine: (ha-994751-m03)       <target dev='hda' bus='virtio'/>
	I1004 03:20:05.007439   30630 main.go:141] libmachine: (ha-994751-m03)     </disk>
	I1004 03:20:05.007448   30630 main.go:141] libmachine: (ha-994751-m03)     <interface type='network'>
	I1004 03:20:05.007465   30630 main.go:141] libmachine: (ha-994751-m03)       <source network='mk-ha-994751'/>
	I1004 03:20:05.007474   30630 main.go:141] libmachine: (ha-994751-m03)       <model type='virtio'/>
	I1004 03:20:05.007484   30630 main.go:141] libmachine: (ha-994751-m03)     </interface>
	I1004 03:20:05.007498   30630 main.go:141] libmachine: (ha-994751-m03)     <interface type='network'>
	I1004 03:20:05.007510   30630 main.go:141] libmachine: (ha-994751-m03)       <source network='default'/>
	I1004 03:20:05.007520   30630 main.go:141] libmachine: (ha-994751-m03)       <model type='virtio'/>
	I1004 03:20:05.007530   30630 main.go:141] libmachine: (ha-994751-m03)     </interface>
	I1004 03:20:05.007540   30630 main.go:141] libmachine: (ha-994751-m03)     <serial type='pty'>
	I1004 03:20:05.007550   30630 main.go:141] libmachine: (ha-994751-m03)       <target port='0'/>
	I1004 03:20:05.007559   30630 main.go:141] libmachine: (ha-994751-m03)     </serial>
	I1004 03:20:05.007576   30630 main.go:141] libmachine: (ha-994751-m03)     <console type='pty'>
	I1004 03:20:05.007591   30630 main.go:141] libmachine: (ha-994751-m03)       <target type='serial' port='0'/>
	I1004 03:20:05.007600   30630 main.go:141] libmachine: (ha-994751-m03)     </console>
	I1004 03:20:05.007608   30630 main.go:141] libmachine: (ha-994751-m03)     <rng model='virtio'>
	I1004 03:20:05.007614   30630 main.go:141] libmachine: (ha-994751-m03)       <backend model='random'>/dev/random</backend>
	I1004 03:20:05.007620   30630 main.go:141] libmachine: (ha-994751-m03)     </rng>
	I1004 03:20:05.007628   30630 main.go:141] libmachine: (ha-994751-m03)     
	I1004 03:20:05.007636   30630 main.go:141] libmachine: (ha-994751-m03)     
	I1004 03:20:05.007652   30630 main.go:141] libmachine: (ha-994751-m03)   </devices>
	I1004 03:20:05.007666   30630 main.go:141] libmachine: (ha-994751-m03) </domain>
	I1004 03:20:05.007678   30630 main.go:141] libmachine: (ha-994751-m03) 
	I1004 03:20:05.014475   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:d0:97:18 in network default
	I1004 03:20:05.015005   30630 main.go:141] libmachine: (ha-994751-m03) Ensuring networks are active...
	I1004 03:20:05.015041   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:05.015645   30630 main.go:141] libmachine: (ha-994751-m03) Ensuring network default is active
	I1004 03:20:05.015928   30630 main.go:141] libmachine: (ha-994751-m03) Ensuring network mk-ha-994751 is active
	I1004 03:20:05.016249   30630 main.go:141] libmachine: (ha-994751-m03) Getting domain xml...
	I1004 03:20:05.016929   30630 main.go:141] libmachine: (ha-994751-m03) Creating domain...
	I1004 03:20:06.261440   30630 main.go:141] libmachine: (ha-994751-m03) Waiting to get IP...
	I1004 03:20:06.262071   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:06.262414   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:06.262472   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:06.262421   31400 retry.go:31] will retry after 250.348601ms: waiting for machine to come up
	I1004 03:20:06.515070   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:06.515535   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:06.515565   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:06.515468   31400 retry.go:31] will retry after 243.422578ms: waiting for machine to come up
	I1004 03:20:06.760919   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:06.761413   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:06.761440   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:06.761366   31400 retry.go:31] will retry after 323.138496ms: waiting for machine to come up
	I1004 03:20:07.085754   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:07.086220   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:07.086254   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:07.086174   31400 retry.go:31] will retry after 589.608599ms: waiting for machine to come up
	I1004 03:20:07.676793   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:07.677255   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:07.677277   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:07.677220   31400 retry.go:31] will retry after 686.955192ms: waiting for machine to come up
	I1004 03:20:08.365977   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:08.366366   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:08.366390   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:08.366322   31400 retry.go:31] will retry after 861.927469ms: waiting for machine to come up
	I1004 03:20:09.229974   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:09.230402   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:09.230431   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:09.230354   31400 retry.go:31] will retry after 766.03024ms: waiting for machine to come up
	I1004 03:20:09.997533   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:09.997938   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:09.997963   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:09.997907   31400 retry.go:31] will retry after 980.127757ms: waiting for machine to come up
	I1004 03:20:10.979306   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:10.979718   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:10.979743   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:10.979684   31400 retry.go:31] will retry after 1.544904084s: waiting for machine to come up
	I1004 03:20:12.525854   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:12.526225   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:12.526249   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:12.526177   31400 retry.go:31] will retry after 1.432028005s: waiting for machine to come up
	I1004 03:20:13.960907   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:13.961388   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:13.961415   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:13.961367   31400 retry.go:31] will retry after 1.927604807s: waiting for machine to come up
	I1004 03:20:15.890697   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:15.891148   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:15.891175   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:15.891091   31400 retry.go:31] will retry after 3.506356031s: waiting for machine to come up
	I1004 03:20:19.400810   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:19.401322   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:19.401349   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:19.401272   31400 retry.go:31] will retry after 3.367410839s: waiting for machine to come up
	I1004 03:20:22.769867   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:22.770373   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:22.770407   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:22.770302   31400 retry.go:31] will retry after 5.266869096s: waiting for machine to come up
	I1004 03:20:28.041532   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:28.041995   30630 main.go:141] libmachine: (ha-994751-m03) Found IP for machine: 192.168.39.53
	I1004 03:20:28.042014   30630 main.go:141] libmachine: (ha-994751-m03) Reserving static IP address...
	I1004 03:20:28.042026   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has current primary IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:28.042375   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find host DHCP lease matching {name: "ha-994751-m03", mac: "52:54:00:49:76:ea", ip: "192.168.39.53"} in network mk-ha-994751
	I1004 03:20:28.115076   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Getting to WaitForSSH function...
	I1004 03:20:28.115105   30630 main.go:141] libmachine: (ha-994751-m03) Reserved static IP address: 192.168.39.53
	I1004 03:20:28.115145   30630 main.go:141] libmachine: (ha-994751-m03) Waiting for SSH to be available...
	I1004 03:20:28.117390   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:28.117662   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751
	I1004 03:20:28.117678   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find defined IP address of network mk-ha-994751 interface with MAC address 52:54:00:49:76:ea
	I1004 03:20:28.117841   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using SSH client type: external
	I1004 03:20:28.117866   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa (-rw-------)
	I1004 03:20:28.117909   30630 main.go:141] libmachine: (ha-994751-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 03:20:28.117924   30630 main.go:141] libmachine: (ha-994751-m03) DBG | About to run SSH command:
	I1004 03:20:28.117940   30630 main.go:141] libmachine: (ha-994751-m03) DBG | exit 0
	I1004 03:20:28.121632   30630 main.go:141] libmachine: (ha-994751-m03) DBG | SSH cmd err, output: exit status 255: 
	I1004 03:20:28.121657   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1004 03:20:28.121668   30630 main.go:141] libmachine: (ha-994751-m03) DBG | command : exit 0
	I1004 03:20:28.121677   30630 main.go:141] libmachine: (ha-994751-m03) DBG | err     : exit status 255
	I1004 03:20:28.121690   30630 main.go:141] libmachine: (ha-994751-m03) DBG | output  : 
	I1004 03:20:31.123157   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Getting to WaitForSSH function...
	I1004 03:20:31.125515   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.125954   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.125981   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.126121   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using SSH client type: external
	I1004 03:20:31.126148   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa (-rw-------)
	I1004 03:20:31.126175   30630 main.go:141] libmachine: (ha-994751-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 03:20:31.126186   30630 main.go:141] libmachine: (ha-994751-m03) DBG | About to run SSH command:
	I1004 03:20:31.126199   30630 main.go:141] libmachine: (ha-994751-m03) DBG | exit 0
	I1004 03:20:31.255788   30630 main.go:141] libmachine: (ha-994751-m03) DBG | SSH cmd err, output: <nil>: 
	I1004 03:20:31.256048   30630 main.go:141] libmachine: (ha-994751-m03) KVM machine creation complete!
	I1004 03:20:31.256416   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetConfigRaw
	I1004 03:20:31.257001   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:31.257196   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:31.257537   30630 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 03:20:31.257552   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetState
	I1004 03:20:31.258954   30630 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 03:20:31.258966   30630 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 03:20:31.258972   30630 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 03:20:31.258978   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.261065   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.261407   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.261432   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.261523   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.261696   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.261827   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.261939   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.262104   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.262338   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.262354   30630 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 03:20:31.371392   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:20:31.371421   30630 main.go:141] libmachine: Detecting the provisioner...
	I1004 03:20:31.371431   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.374360   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.374677   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.374703   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.374874   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.375093   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.375299   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.375463   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.375637   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.375858   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.375873   30630 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 03:20:31.489043   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 03:20:31.489093   30630 main.go:141] libmachine: found compatible host: buildroot
	I1004 03:20:31.489100   30630 main.go:141] libmachine: Provisioning with buildroot...
	I1004 03:20:31.489107   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetMachineName
	I1004 03:20:31.489333   30630 buildroot.go:166] provisioning hostname "ha-994751-m03"
	I1004 03:20:31.489357   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetMachineName
	I1004 03:20:31.489534   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.492101   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.492553   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.492573   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.492727   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.492907   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.493039   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.493147   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.493277   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.493442   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.493453   30630 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-994751-m03 && echo "ha-994751-m03" | sudo tee /etc/hostname
	I1004 03:20:31.626029   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-994751-m03
	
	I1004 03:20:31.626058   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.628598   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.629032   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.629055   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.629247   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.629454   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.629599   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.629757   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.629901   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.630075   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.630108   30630 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-994751-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-994751-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-994751-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:20:31.754855   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:20:31.754886   30630 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 03:20:31.754923   30630 buildroot.go:174] setting up certificates
	I1004 03:20:31.754934   30630 provision.go:84] configureAuth start
	I1004 03:20:31.754946   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetMachineName
	I1004 03:20:31.755194   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetIP
	I1004 03:20:31.757747   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.758065   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.758087   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.758193   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.760414   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.760746   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.760771   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.760844   30630 provision.go:143] copyHostCerts
	I1004 03:20:31.760875   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:20:31.760907   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 03:20:31.760915   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:20:31.760984   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 03:20:31.761064   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:20:31.761082   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 03:20:31.761088   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:20:31.761114   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 03:20:31.761166   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:20:31.761182   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 03:20:31.761188   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:20:31.761214   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 03:20:31.761271   30630 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.ha-994751-m03 san=[127.0.0.1 192.168.39.53 ha-994751-m03 localhost minikube]
	I1004 03:20:31.828214   30630 provision.go:177] copyRemoteCerts
	I1004 03:20:31.828263   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:20:31.828283   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.830707   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.831047   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.831078   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.831192   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.831375   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.831522   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.831636   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:20:31.917792   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:20:31.917859   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:20:31.943534   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:20:31.943606   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1004 03:20:31.968990   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:20:31.969060   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 03:20:31.992331   30630 provision.go:87] duration metric: took 237.384107ms to configureAuth
	I1004 03:20:31.992362   30630 buildroot.go:189] setting minikube options for container-runtime
	I1004 03:20:31.992622   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:20:31.992738   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.995570   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.995946   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.995975   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.996126   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.996306   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.996434   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.996569   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.996677   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.996863   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.996880   30630 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:20:32.229026   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:20:32.229061   30630 main.go:141] libmachine: Checking connection to Docker...
	I1004 03:20:32.229071   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetURL
	I1004 03:20:32.230237   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using libvirt version 6000000
	I1004 03:20:32.232533   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.232839   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.232870   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.233012   30630 main.go:141] libmachine: Docker is up and running!
	I1004 03:20:32.233029   30630 main.go:141] libmachine: Reticulating splines...
	I1004 03:20:32.233037   30630 client.go:171] duration metric: took 27.675822366s to LocalClient.Create
	I1004 03:20:32.233061   30630 start.go:167] duration metric: took 27.675885367s to libmachine.API.Create "ha-994751"
	I1004 03:20:32.233071   30630 start.go:293] postStartSetup for "ha-994751-m03" (driver="kvm2")
	I1004 03:20:32.233080   30630 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:20:32.233096   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.233315   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:20:32.233341   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:32.235889   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.236270   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.236297   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.236452   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:32.236641   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.236787   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:32.236936   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:20:32.321827   30630 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:20:32.326129   30630 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 03:20:32.326152   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 03:20:32.326232   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 03:20:32.326328   30630 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 03:20:32.326339   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /etc/ssl/certs/168792.pem
	I1004 03:20:32.326421   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:20:32.336376   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:20:32.359653   30630 start.go:296] duration metric: took 126.571809ms for postStartSetup
	I1004 03:20:32.359721   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetConfigRaw
	I1004 03:20:32.360268   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetIP
	I1004 03:20:32.362856   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.363243   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.363268   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.363469   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:20:32.363663   30630 start.go:128] duration metric: took 27.824325438s to createHost
	I1004 03:20:32.363686   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:32.365882   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.366210   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.366226   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.366350   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:32.366523   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.366674   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.366824   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:32.366985   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:32.367180   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:32.367194   30630 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 03:20:32.480703   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728012032.461011085
	
	I1004 03:20:32.480725   30630 fix.go:216] guest clock: 1728012032.461011085
	I1004 03:20:32.480735   30630 fix.go:229] Guest: 2024-10-04 03:20:32.461011085 +0000 UTC Remote: 2024-10-04 03:20:32.363675 +0000 UTC m=+146.676506004 (delta=97.336085ms)
	I1004 03:20:32.480753   30630 fix.go:200] guest clock delta is within tolerance: 97.336085ms
	I1004 03:20:32.480760   30630 start.go:83] releasing machines lock for "ha-994751-m03", held for 27.941569364s
	I1004 03:20:32.480780   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.480989   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetIP
	I1004 03:20:32.483796   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.484159   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.484191   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.486391   30630 out.go:177] * Found network options:
	I1004 03:20:32.487654   30630 out.go:177]   - NO_PROXY=192.168.39.65,192.168.39.117
	W1004 03:20:32.488913   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	W1004 03:20:32.488946   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:20:32.488964   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.489521   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.489776   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.489869   30630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:20:32.489906   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	W1004 03:20:32.489985   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	W1004 03:20:32.490009   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:20:32.490068   30630 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:20:32.490090   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:32.492646   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.492900   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.493125   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.493149   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.493245   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.493267   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.493334   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:32.493500   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.493556   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:32.493707   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:32.493736   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.493920   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:20:32.493987   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:32.494105   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:20:32.742057   30630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 03:20:32.749338   30630 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 03:20:32.749392   30630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:20:32.765055   30630 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 03:20:32.765079   30630 start.go:495] detecting cgroup driver to use...
	I1004 03:20:32.765139   30630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:20:32.780546   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:20:32.797729   30630 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:20:32.797789   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:20:32.810917   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:20:32.823880   30630 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:20:32.941749   30630 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:20:33.094803   30630 docker.go:233] disabling docker service ...
	I1004 03:20:33.094875   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:20:33.108945   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:20:33.122238   30630 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:20:33.259499   30630 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:20:33.382162   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:20:33.399956   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:20:33.419077   30630 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:20:33.419147   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.431123   30630 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:20:33.431176   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.442393   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.454523   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.465583   30630 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:20:33.477059   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.487953   30630 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.505077   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.515522   30630 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:20:33.526537   30630 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 03:20:33.526592   30630 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 03:20:33.540307   30630 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:20:33.550485   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:20:33.660459   30630 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 03:20:33.759769   30630 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:20:33.759862   30630 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:20:33.764677   30630 start.go:563] Will wait 60s for crictl version
	I1004 03:20:33.764728   30630 ssh_runner.go:195] Run: which crictl
	I1004 03:20:33.768748   30630 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:20:33.815756   30630 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 03:20:33.815849   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:20:33.843604   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:20:33.875395   30630 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 03:20:33.876902   30630 out.go:177]   - env NO_PROXY=192.168.39.65
	I1004 03:20:33.878202   30630 out.go:177]   - env NO_PROXY=192.168.39.65,192.168.39.117
	I1004 03:20:33.879354   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetIP
	I1004 03:20:33.881763   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:33.882075   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:33.882116   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:33.882282   30630 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 03:20:33.887016   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:20:33.900617   30630 mustload.go:65] Loading cluster: ha-994751
	I1004 03:20:33.900859   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:20:33.901101   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:20:33.901139   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:20:33.916080   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40599
	I1004 03:20:33.916545   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:20:33.917019   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:20:33.917038   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:20:33.917311   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:20:33.917490   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:20:33.918758   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:20:33.919091   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:20:33.919127   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:20:33.934895   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43781
	I1004 03:20:33.935369   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:20:33.935847   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:20:33.935870   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:20:33.936191   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:20:33.936373   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:20:33.936519   30630 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751 for IP: 192.168.39.53
	I1004 03:20:33.936531   30630 certs.go:194] generating shared ca certs ...
	I1004 03:20:33.936550   30630 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:20:33.936692   30630 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 03:20:33.936742   30630 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 03:20:33.936754   30630 certs.go:256] generating profile certs ...
	I1004 03:20:33.936848   30630 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key
	I1004 03:20:33.936877   30630 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.c7b5eb21
	I1004 03:20:33.936895   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.c7b5eb21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65 192.168.39.117 192.168.39.53 192.168.39.254]
	I1004 03:20:34.019919   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.c7b5eb21 ...
	I1004 03:20:34.019948   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.c7b5eb21: {Name:mk35ee00bf994088c6b50391189f3e324fc0101b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:20:34.020103   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.c7b5eb21 ...
	I1004 03:20:34.020114   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.c7b5eb21: {Name:mk408ba3330d2e90d98d309cc86d9e5e670f9570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:20:34.020180   30630 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.c7b5eb21 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt
	I1004 03:20:34.020296   30630 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.c7b5eb21 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key
	I1004 03:20:34.020411   30630 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key
	I1004 03:20:34.020425   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:20:34.020438   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:20:34.020452   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:20:34.020465   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:20:34.020477   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:20:34.020489   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:20:34.020501   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:20:34.035820   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:20:34.035890   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 03:20:34.035926   30630 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 03:20:34.035946   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:20:34.035969   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:20:34.035990   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:20:34.036010   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 03:20:34.036045   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:20:34.036074   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem -> /usr/share/ca-certificates/16879.pem
	I1004 03:20:34.036087   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /usr/share/ca-certificates/168792.pem
	I1004 03:20:34.036100   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:20:34.036130   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:20:34.039080   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:20:34.039469   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:20:34.039485   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:20:34.039662   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:20:34.039893   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:20:34.040036   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:20:34.040151   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:20:34.112207   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1004 03:20:34.117935   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1004 03:20:34.131114   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1004 03:20:34.136170   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1004 03:20:34.149066   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1004 03:20:34.153717   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1004 03:20:34.167750   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1004 03:20:34.172288   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1004 03:20:34.184761   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1004 03:20:34.189707   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1004 03:20:34.201792   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1004 03:20:34.206305   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1004 03:20:34.218091   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:20:34.243235   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 03:20:34.267642   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:20:34.291741   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 03:20:34.317056   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1004 03:20:34.340832   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 03:20:34.364951   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:20:34.392565   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:20:34.419461   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 03:20:34.444597   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 03:20:34.470026   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:20:34.495443   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1004 03:20:34.513085   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1004 03:20:34.530602   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1004 03:20:34.548064   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1004 03:20:34.565179   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1004 03:20:34.582199   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1004 03:20:34.599469   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1004 03:20:34.617008   30630 ssh_runner.go:195] Run: openssl version
	I1004 03:20:34.623238   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 03:20:34.635851   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 03:20:34.641242   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:20:34.641300   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 03:20:34.647354   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:20:34.660625   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:20:34.673563   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:20:34.678872   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:20:34.678918   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:20:34.685228   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:20:34.696965   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 03:20:34.708173   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 03:20:34.712666   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:20:34.712728   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 03:20:34.718347   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 03:20:34.729423   30630 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:20:34.733599   30630 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 03:20:34.733645   30630 kubeadm.go:934] updating node {m03 192.168.39.53 8443 v1.31.1 crio true true} ...
	I1004 03:20:34.733734   30630 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-994751-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 03:20:34.733759   30630 kube-vip.go:115] generating kube-vip config ...
	I1004 03:20:34.733788   30630 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1004 03:20:34.753104   30630 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1004 03:20:34.753160   30630 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1004 03:20:34.753207   30630 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:20:34.764605   30630 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1004 03:20:34.764653   30630 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1004 03:20:34.776026   30630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1004 03:20:34.776058   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1004 03:20:34.776073   30630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1004 03:20:34.776077   30630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1004 03:20:34.776094   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1004 03:20:34.776111   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1004 03:20:34.776123   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:20:34.776154   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1004 03:20:34.784508   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1004 03:20:34.784532   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1004 03:20:34.784546   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1004 03:20:34.784554   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1004 03:20:34.816412   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1004 03:20:34.816537   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1004 03:20:34.932259   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1004 03:20:34.932304   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1004 03:20:35.665849   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1004 03:20:35.676114   30630 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1004 03:20:35.694028   30630 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:20:35.718864   30630 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1004 03:20:35.736291   30630 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:20:35.740907   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:20:35.753115   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:20:35.870874   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:20:35.888175   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:20:35.888614   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:20:35.888675   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:20:35.903712   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33367
	I1004 03:20:35.904202   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:20:35.904676   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:20:35.904700   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:20:35.904994   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:20:35.905194   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:20:35.905357   30630 start.go:317] joinCluster: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluste
rName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fa
lse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:20:35.905474   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1004 03:20:35.905495   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:20:35.908275   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:20:35.908713   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:20:35.908739   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:20:35.908875   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:20:35.909047   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:20:35.909173   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:20:35.909303   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:20:36.083592   30630 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:20:36.083645   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e5abq7.epvk18yjfmjj0i7x --discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-994751-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443"
	I1004 03:20:57.688048   30630 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e5abq7.epvk18yjfmjj0i7x --discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-994751-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443": (21.604380186s)
	I1004 03:20:57.688081   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1004 03:20:58.272843   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-994751-m03 minikube.k8s.io/updated_at=2024_10_04T03_20_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-994751 minikube.k8s.io/primary=false
	I1004 03:20:58.405355   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-994751-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1004 03:20:58.529681   30630 start.go:319] duration metric: took 22.624319783s to joinCluster
	I1004 03:20:58.529762   30630 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:20:58.530014   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:20:58.531345   30630 out.go:177] * Verifying Kubernetes components...
	I1004 03:20:58.532710   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:20:58.800802   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:20:58.844203   30630 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:20:58.844571   30630 kapi.go:59] client config for ha-994751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key", CAFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1004 03:20:58.844645   30630 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.65:8443
	I1004 03:20:58.844892   30630 node_ready.go:35] waiting up to 6m0s for node "ha-994751-m03" to be "Ready" ...
	I1004 03:20:58.844972   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:20:58.844982   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:58.844998   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:58.845007   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:58.848088   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:59.345094   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:20:59.345120   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:59.345130   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:59.345135   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:59.353141   30630 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:20:59.845733   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:20:59.845805   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:59.845823   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:59.845832   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:59.850171   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:00.345129   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:00.345150   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:00.345159   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:00.345163   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:00.348609   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:00.845173   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:00.845196   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:00.845205   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:00.845210   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:00.850207   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:00.851383   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:01.345051   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:01.345072   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:01.345079   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:01.345083   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:01.349207   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:01.845336   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:01.845357   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:01.845364   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:01.845369   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:01.848367   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:02.345495   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:02.345521   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:02.345529   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:02.345534   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:02.349838   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:02.845704   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:02.845732   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:02.845745   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:02.845752   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:02.849074   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:03.345450   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:03.345472   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:03.345480   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:03.345484   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:03.349082   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:03.349671   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:03.846035   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:03.846061   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:03.846072   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:03.846079   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:03.850455   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:04.345156   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:04.345183   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:04.345191   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:04.345196   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:04.349346   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:04.845676   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:04.845695   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:04.845702   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:04.845707   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:04.849977   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:05.345993   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:05.346019   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:05.346028   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:05.346032   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:05.350487   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:05.352077   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:05.845454   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:05.845473   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:05.845486   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:05.845493   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:05.848902   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:06.345394   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:06.345416   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:06.345424   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:06.345428   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:06.348963   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:06.846045   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:06.846066   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:06.846077   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:06.846084   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:06.849291   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:07.345224   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:07.345249   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:07.345258   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:07.345261   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:07.348950   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:07.845773   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:07.845797   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:07.845807   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:07.845812   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:07.853790   30630 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:21:07.854460   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:08.345396   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:08.345417   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:08.345425   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:08.345430   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:08.348967   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:08.845960   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:08.845987   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:08.845998   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:08.846004   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:08.849592   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:09.345163   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:09.345187   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:09.345195   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:09.345199   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:09.348412   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:09.845700   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:09.845720   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:09.845727   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:09.845732   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:09.848850   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:10.346002   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:10.346024   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:10.346036   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:10.346041   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:10.349778   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:10.350421   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:10.845273   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:10.845342   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:10.845357   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:10.845364   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:10.849249   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:11.345450   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:11.345474   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:11.345485   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:11.345490   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:11.348615   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:11.845521   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:11.845544   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:11.845552   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:11.845557   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:11.851020   30630 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 03:21:12.345427   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:12.345455   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:12.345466   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:12.345473   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:12.348894   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:12.845773   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:12.845807   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:12.845815   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:12.845821   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:12.849096   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:12.849859   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:13.345600   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:13.345625   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:13.345635   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:13.345641   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:13.348986   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:13.845088   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:13.845115   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:13.845122   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:13.845126   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:13.848813   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:14.345772   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:14.345796   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:14.345804   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:14.345809   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:14.349538   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:14.845967   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:14.845999   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:14.846010   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:14.846015   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:14.849646   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:14.850106   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:15.345479   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:15.345501   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:15.345509   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:15.345514   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:15.348633   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:15.845308   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:15.845329   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:15.845337   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:15.845342   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:15.848613   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:16.345615   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:16.345635   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.345697   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.345709   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.349189   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:16.845211   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:16.845234   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.845243   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.845247   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.848314   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:16.848965   30630 node_ready.go:49] node "ha-994751-m03" has status "Ready":"True"
	I1004 03:21:16.848983   30630 node_ready.go:38] duration metric: took 18.004075427s for node "ha-994751-m03" to be "Ready" ...
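
	The polling above is minikube issuing GET /api/v1/nodes/ha-994751-m03 roughly every 500ms until the node's Ready condition turns True (about 18s here). A minimal client-go sketch of the same wait, assuming the kubeconfig path and node name shown in the log, might be:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path and node name are the ones shown in the log; adjust as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19546-9647/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms, as the log does, for up to 6 minutes.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "ha-994751-m03", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println(`node "ha-994751-m03" is Ready`)
	}
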
	I1004 03:21:16.848993   30630 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:21:16.849057   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:16.849066   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.849073   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.849077   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.855878   30630 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:21:16.863339   30630 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.863413   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-l6zst
	I1004 03:21:16.863421   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.863428   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.863432   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.866627   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:16.867225   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:16.867246   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.867254   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.867257   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.869745   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.870174   30630 pod_ready.go:93] pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:16.870189   30630 pod_ready.go:82] duration metric: took 6.828744ms for pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.870197   30630 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.870257   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zgdck
	I1004 03:21:16.870266   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.870272   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.870277   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.872665   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.873280   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:16.873293   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.873300   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.873304   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.875767   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.876277   30630 pod_ready.go:93] pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:16.876299   30630 pod_ready.go:82] duration metric: took 6.094854ms for pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.876312   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.876381   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751
	I1004 03:21:16.876394   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.876405   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.876415   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.878641   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.879297   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:16.879315   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.879323   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.879330   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.881505   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.881911   30630 pod_ready.go:93] pod "etcd-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:16.881925   30630 pod_ready.go:82] duration metric: took 5.606429ms for pod "etcd-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.881933   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.881973   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751-m02
	I1004 03:21:16.881980   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.881986   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.881991   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.884217   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.884882   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:16.884896   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.884903   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.884907   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.887109   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.887576   30630 pod_ready.go:93] pod "etcd-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:16.887592   30630 pod_ready.go:82] duration metric: took 5.65336ms for pod "etcd-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.887600   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.046004   30630 request.go:632] Waited for 158.354973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751-m03
	I1004 03:21:17.046081   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751-m03
	I1004 03:21:17.046092   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.046103   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.046113   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.049599   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:17.245822   30630 request.go:632] Waited for 195.387196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:17.245913   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:17.245920   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.245929   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.245937   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.249684   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:17.250373   30630 pod_ready.go:93] pod "etcd-ha-994751-m03" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:17.250391   30630 pod_ready.go:82] duration metric: took 362.785163ms for pod "etcd-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
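
	The "Waited for ... due to client-side throttling" lines are produced by client-go's token-bucket rate limiter: the rest.Config dumped earlier has QPS:0 and Burst:0, so the client-go defaults of 5 QPS / 10 burst apply and back-to-back pod and node GETs get paced. A minimal sketch of building the same clientset with higher limits (illustrative values, same assumed kubeconfig path) would be:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Same kubeconfig path as in the log; the QPS/Burst values below are illustrative.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19546-9647/kubeconfig")
		if err != nil {
			panic(err)
		}
		// With QPS and Burst left at zero, client-go falls back to 5 QPS / 10 burst,
		// which is what triggers the throttling waits above; raising them removes the pauses.
		cfg.QPS = 50
		cfg.Burst = 100
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
		fmt.Println("clientset built with a higher client-side rate limit")
	}
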
	I1004 03:21:17.250406   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.445530   30630 request.go:632] Waited for 195.055856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751
	I1004 03:21:17.445608   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751
	I1004 03:21:17.445614   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.445621   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.445627   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.449209   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:17.645177   30630 request.go:632] Waited for 195.266127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:17.645277   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:17.645290   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.645300   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.645307   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.648339   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:17.648978   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:17.648997   30630 pod_ready.go:82] duration metric: took 398.583614ms for pod "kube-apiserver-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.649010   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.845996   30630 request.go:632] Waited for 196.900731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m02
	I1004 03:21:17.846073   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m02
	I1004 03:21:17.846082   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.846092   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.846097   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.849729   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.045771   30630 request.go:632] Waited for 195.364695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:18.045824   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:18.045829   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.045837   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.045843   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.049741   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.050457   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:18.050479   30630 pod_ready.go:82] duration metric: took 401.458645ms for pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.050491   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.245708   30630 request.go:632] Waited for 195.123371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m03
	I1004 03:21:18.245779   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m03
	I1004 03:21:18.245788   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.245798   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.245805   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.248803   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:18.445802   30630 request.go:632] Waited for 196.359557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:18.445880   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:18.445891   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.445903   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.445912   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.449153   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.449859   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751-m03" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:18.449875   30630 pod_ready.go:82] duration metric: took 399.376745ms for pod "kube-apiserver-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.449884   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.646109   30630 request.go:632] Waited for 196.148252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751
	I1004 03:21:18.646174   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751
	I1004 03:21:18.646181   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.646190   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.646196   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.649910   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.845959   30630 request.go:632] Waited for 195.355273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:18.846052   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:18.846066   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.846077   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.846084   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.849452   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.849983   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:18.849999   30630 pod_ready.go:82] duration metric: took 400.109282ms for pod "kube-controller-manager-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.850007   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.045892   30630 request.go:632] Waited for 195.812536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m02
	I1004 03:21:19.045949   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m02
	I1004 03:21:19.045954   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.045962   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.045965   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.049481   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:19.245703   30630 request.go:632] Waited for 195.37604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:19.245795   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:19.245807   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.245816   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.245821   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.249221   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:19.249770   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:19.249786   30630 pod_ready.go:82] duration metric: took 399.773598ms for pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.249797   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.445959   30630 request.go:632] Waited for 196.084722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m03
	I1004 03:21:19.446017   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m03
	I1004 03:21:19.446023   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.446030   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.446034   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.449595   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:19.646055   30630 request.go:632] Waited for 195.452676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:19.646103   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:19.646110   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.646121   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.646126   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.649308   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:19.649980   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751-m03" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:19.650000   30630 pod_ready.go:82] duration metric: took 400.193489ms for pod "kube-controller-manager-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.650010   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9q6q2" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.846046   30630 request.go:632] Waited for 195.979747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9q6q2
	I1004 03:21:19.846103   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9q6q2
	I1004 03:21:19.846109   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.846116   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.846121   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.850032   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.045346   30630 request.go:632] Waited for 194.290233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:20.045412   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:20.045419   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.045429   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.045435   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.049187   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.049735   30630 pod_ready.go:93] pod "kube-proxy-9q6q2" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:20.049758   30630 pod_ready.go:82] duration metric: took 399.740576ms for pod "kube-proxy-9q6q2" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.049773   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f44b9" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.245829   30630 request.go:632] Waited for 195.994651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f44b9
	I1004 03:21:20.245916   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f44b9
	I1004 03:21:20.245926   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.245933   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.245938   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.248898   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:20.445831   30630 request.go:632] Waited for 196.355752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:20.445904   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:20.445910   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.445921   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.445925   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.449843   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.450548   30630 pod_ready.go:93] pod "kube-proxy-f44b9" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:20.450575   30630 pod_ready.go:82] duration metric: took 400.789271ms for pod "kube-proxy-f44b9" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.450587   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ph6cf" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.645991   30630 request.go:632] Waited for 195.320241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ph6cf
	I1004 03:21:20.646051   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ph6cf
	I1004 03:21:20.646061   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.646072   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.646084   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.649526   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.845351   30630 request.go:632] Waited for 195.084601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:20.845415   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:20.845423   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.845433   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.845439   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.849107   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.849683   30630 pod_ready.go:93] pod "kube-proxy-ph6cf" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:20.849702   30630 pod_ready.go:82] duration metric: took 399.106228ms for pod "kube-proxy-ph6cf" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.849714   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.046211   30630 request.go:632] Waited for 196.431281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751
	I1004 03:21:21.046274   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751
	I1004 03:21:21.046287   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.046297   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.046303   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.049644   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:21.245652   30630 request.go:632] Waited for 195.357611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:21.245701   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:21.245707   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.245717   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.245729   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.248937   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:21.249459   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:21.249477   30630 pod_ready.go:82] duration metric: took 399.754955ms for pod "kube-scheduler-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.249485   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.445624   30630 request.go:632] Waited for 196.058326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m02
	I1004 03:21:21.445695   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m02
	I1004 03:21:21.445700   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.445708   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.445713   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.449658   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:21.645861   30630 request.go:632] Waited for 195.383024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:21.645947   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:21.645959   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.646444   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.646457   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.649535   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:21.650129   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:21.650145   30630 pod_ready.go:82] duration metric: took 400.653773ms for pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.650155   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.846280   30630 request.go:632] Waited for 196.044885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m03
	I1004 03:21:21.846336   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m03
	I1004 03:21:21.846342   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.846349   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.846354   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.849713   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:22.045755   30630 request.go:632] Waited for 195.414064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:22.045827   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:22.045834   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.045841   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.045847   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.049538   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:22.050359   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751-m03" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:22.050378   30630 pod_ready.go:82] duration metric: took 400.213357ms for pod "kube-scheduler-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:22.050389   30630 pod_ready.go:39] duration metric: took 5.201387664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:21:22.050412   30630 api_server.go:52] waiting for apiserver process to appear ...
	I1004 03:21:22.050477   30630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:21:22.066998   30630 api_server.go:72] duration metric: took 23.53720299s to wait for apiserver process to appear ...
	I1004 03:21:22.067023   30630 api_server.go:88] waiting for apiserver healthz status ...
	I1004 03:21:22.067042   30630 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I1004 03:21:22.074791   30630 api_server.go:279] https://192.168.39.65:8443/healthz returned 200:
	ok
	I1004 03:21:22.074864   30630 round_trippers.go:463] GET https://192.168.39.65:8443/version
	I1004 03:21:22.074872   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.074885   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.074896   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.075865   30630 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1004 03:21:22.075921   30630 api_server.go:141] control plane version: v1.31.1
	I1004 03:21:22.075934   30630 api_server.go:131] duration metric: took 8.905409ms to wait for apiserver health ...
	I1004 03:21:22.075941   30630 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 03:21:22.245389   30630 request.go:632] Waited for 169.386949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:22.245481   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:22.245490   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.245505   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.245516   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.251617   30630 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:21:22.258944   30630 system_pods.go:59] 24 kube-system pods found
	I1004 03:21:22.258969   30630 system_pods.go:61] "coredns-7c65d6cfc9-l6zst" [554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab] Running
	I1004 03:21:22.258974   30630 system_pods.go:61] "coredns-7c65d6cfc9-zgdck" [dcd6ed49-8491-4eb0-9863-b498c76ec3c5] Running
	I1004 03:21:22.258980   30630 system_pods.go:61] "etcd-ha-994751" [ad26bfe8-b3b3-44fb-8f83-fd4f62a92ea6] Running
	I1004 03:21:22.258984   30630 system_pods.go:61] "etcd-ha-994751-m02" [540bc27b-d1ee-48e7-99a3-5daf036ec06f] Running
	I1004 03:21:22.258987   30630 system_pods.go:61] "etcd-ha-994751-m03" [610c4e0c-9af8-441e-9524-ccd6fe6fe390] Running
	I1004 03:21:22.258990   30630 system_pods.go:61] "kindnet-2mhh2" [442d5ad9-dc9c-4a07-90b3-549591f9d2f1] Running
	I1004 03:21:22.258992   30630 system_pods.go:61] "kindnet-clt5p" [a904ebc8-f149-4b9f-9637-a37cb56af836] Running
	I1004 03:21:22.258994   30630 system_pods.go:61] "kindnet-rmcvt" [08ef4494-9229-4cd3-b22e-10709b88e14c] Running
	I1004 03:21:22.258997   30630 system_pods.go:61] "kube-apiserver-ha-994751" [68f6b078-f7ae-4c2d-b424-372647c7d203] Running
	I1004 03:21:22.259012   30630 system_pods.go:61] "kube-apiserver-ha-994751-m02" [f4773005-30fa-46bc-b372-bbd0c7bfc1f1] Running
	I1004 03:21:22.259017   30630 system_pods.go:61] "kube-apiserver-ha-994751-m03" [42150ae1-b298-4974-976f-05e9a2a32154] Running
	I1004 03:21:22.259020   30630 system_pods.go:61] "kube-controller-manager-ha-994751" [7a09373f-d6d5-4fa2-bc68-0f2ba151761f] Running
	I1004 03:21:22.259023   30630 system_pods.go:61] "kube-controller-manager-ha-994751-m02" [31278e89-fb33-4713-b0e4-3b589944caa9] Running
	I1004 03:21:22.259027   30630 system_pods.go:61] "kube-controller-manager-ha-994751-m03" [5897468d-7872-4fed-81bc-bf9b37e42ef4] Running
	I1004 03:21:22.259030   30630 system_pods.go:61] "kube-proxy-9q6q2" [a3b96ca0-fe8c-4492-a05c-5f8ff9cb8d3f] Running
	I1004 03:21:22.259033   30630 system_pods.go:61] "kube-proxy-f44b9" [e3e1a917-0150-4608-b5f3-b590d330d2ce] Running
	I1004 03:21:22.259036   30630 system_pods.go:61] "kube-proxy-ph6cf" [0eef5964-876b-4cee-ad4c-c93ab034d3f9] Running
	I1004 03:21:22.259039   30630 system_pods.go:61] "kube-scheduler-ha-994751" [527e5bff-7234-4ab0-9952-fe9ec87ab01a] Running
	I1004 03:21:22.259042   30630 system_pods.go:61] "kube-scheduler-ha-994751-m02" [9c89432d-e607-4e86-8d68-12ee0c2f3170] Running
	I1004 03:21:22.259046   30630 system_pods.go:61] "kube-scheduler-ha-994751-m03" [f53fda60-a075-4f78-a64b-52e960a4b28b] Running
	I1004 03:21:22.259048   30630 system_pods.go:61] "kube-vip-ha-994751" [7955e17e-6c22-49b3-aa6a-9d37b9bc7942] Running
	I1004 03:21:22.259051   30630 system_pods.go:61] "kube-vip-ha-994751-m02" [944f1844-b8c2-410e-981d-3705beb22638] Running
	I1004 03:21:22.259054   30630 system_pods.go:61] "kube-vip-ha-994751-m03" [9ec22347-f3d6-419e-867a-0de177976203] Running
	I1004 03:21:22.259056   30630 system_pods.go:61] "storage-provisioner" [cc60903f-91b9-4e59-92ab-9f16c09d38d2] Running
	I1004 03:21:22.259062   30630 system_pods.go:74] duration metric: took 183.116626ms to wait for pod list to return data ...
	I1004 03:21:22.259072   30630 default_sa.go:34] waiting for default service account to be created ...
	I1004 03:21:22.445504   30630 request.go:632] Waited for 186.355323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:21:22.445557   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:21:22.445563   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.445570   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.445575   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.449437   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:22.449567   30630 default_sa.go:45] found service account: "default"
	I1004 03:21:22.449589   30630 default_sa.go:55] duration metric: took 190.510625ms for default service account to be created ...
	I1004 03:21:22.449599   30630 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 03:21:22.646023   30630 request.go:632] Waited for 196.345892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:22.646077   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:22.646096   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.646106   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.646115   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.652169   30630 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:21:22.660351   30630 system_pods.go:86] 24 kube-system pods found
	I1004 03:21:22.660376   30630 system_pods.go:89] "coredns-7c65d6cfc9-l6zst" [554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab] Running
	I1004 03:21:22.660386   30630 system_pods.go:89] "coredns-7c65d6cfc9-zgdck" [dcd6ed49-8491-4eb0-9863-b498c76ec3c5] Running
	I1004 03:21:22.660391   30630 system_pods.go:89] "etcd-ha-994751" [ad26bfe8-b3b3-44fb-8f83-fd4f62a92ea6] Running
	I1004 03:21:22.660395   30630 system_pods.go:89] "etcd-ha-994751-m02" [540bc27b-d1ee-48e7-99a3-5daf036ec06f] Running
	I1004 03:21:22.660398   30630 system_pods.go:89] "etcd-ha-994751-m03" [610c4e0c-9af8-441e-9524-ccd6fe6fe390] Running
	I1004 03:21:22.660402   30630 system_pods.go:89] "kindnet-2mhh2" [442d5ad9-dc9c-4a07-90b3-549591f9d2f1] Running
	I1004 03:21:22.660405   30630 system_pods.go:89] "kindnet-clt5p" [a904ebc8-f149-4b9f-9637-a37cb56af836] Running
	I1004 03:21:22.660408   30630 system_pods.go:89] "kindnet-rmcvt" [08ef4494-9229-4cd3-b22e-10709b88e14c] Running
	I1004 03:21:22.660412   30630 system_pods.go:89] "kube-apiserver-ha-994751" [68f6b078-f7ae-4c2d-b424-372647c7d203] Running
	I1004 03:21:22.660416   30630 system_pods.go:89] "kube-apiserver-ha-994751-m02" [f4773005-30fa-46bc-b372-bbd0c7bfc1f1] Running
	I1004 03:21:22.660419   30630 system_pods.go:89] "kube-apiserver-ha-994751-m03" [42150ae1-b298-4974-976f-05e9a2a32154] Running
	I1004 03:21:22.660423   30630 system_pods.go:89] "kube-controller-manager-ha-994751" [7a09373f-d6d5-4fa2-bc68-0f2ba151761f] Running
	I1004 03:21:22.660426   30630 system_pods.go:89] "kube-controller-manager-ha-994751-m02" [31278e89-fb33-4713-b0e4-3b589944caa9] Running
	I1004 03:21:22.660432   30630 system_pods.go:89] "kube-controller-manager-ha-994751-m03" [5897468d-7872-4fed-81bc-bf9b37e42ef4] Running
	I1004 03:21:22.660437   30630 system_pods.go:89] "kube-proxy-9q6q2" [a3b96ca0-fe8c-4492-a05c-5f8ff9cb8d3f] Running
	I1004 03:21:22.660440   30630 system_pods.go:89] "kube-proxy-f44b9" [e3e1a917-0150-4608-b5f3-b590d330d2ce] Running
	I1004 03:21:22.660443   30630 system_pods.go:89] "kube-proxy-ph6cf" [0eef5964-876b-4cee-ad4c-c93ab034d3f9] Running
	I1004 03:21:22.660450   30630 system_pods.go:89] "kube-scheduler-ha-994751" [527e5bff-7234-4ab0-9952-fe9ec87ab01a] Running
	I1004 03:21:22.660453   30630 system_pods.go:89] "kube-scheduler-ha-994751-m02" [9c89432d-e607-4e86-8d68-12ee0c2f3170] Running
	I1004 03:21:22.660456   30630 system_pods.go:89] "kube-scheduler-ha-994751-m03" [f53fda60-a075-4f78-a64b-52e960a4b28b] Running
	I1004 03:21:22.660465   30630 system_pods.go:89] "kube-vip-ha-994751" [7955e17e-6c22-49b3-aa6a-9d37b9bc7942] Running
	I1004 03:21:22.660470   30630 system_pods.go:89] "kube-vip-ha-994751-m02" [944f1844-b8c2-410e-981d-3705beb22638] Running
	I1004 03:21:22.660473   30630 system_pods.go:89] "kube-vip-ha-994751-m03" [9ec22347-f3d6-419e-867a-0de177976203] Running
	I1004 03:21:22.660476   30630 system_pods.go:89] "storage-provisioner" [cc60903f-91b9-4e59-92ab-9f16c09d38d2] Running
	I1004 03:21:22.660481   30630 system_pods.go:126] duration metric: took 210.876444ms to wait for k8s-apps to be running ...
	I1004 03:21:22.660493   30630 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 03:21:22.660540   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:21:22.675933   30630 system_svc.go:56] duration metric: took 15.434198ms WaitForService to wait for kubelet
	I1004 03:21:22.675957   30630 kubeadm.go:582] duration metric: took 24.146164676s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:21:22.675972   30630 node_conditions.go:102] verifying NodePressure condition ...
	I1004 03:21:22.845860   30630 request.go:632] Waited for 169.820621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes
	I1004 03:21:22.845932   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes
	I1004 03:21:22.845941   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.845948   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.845959   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.850058   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:22.851493   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:21:22.851511   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:21:22.851521   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:21:22.851525   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:21:22.851529   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:21:22.851534   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:21:22.851538   30630 node_conditions.go:105] duration metric: took 175.561582ms to run NodePressure ...
	I1004 03:21:22.851551   30630 start.go:241] waiting for startup goroutines ...
	I1004 03:21:22.851569   30630 start.go:255] writing updated cluster config ...
	I1004 03:21:22.851861   30630 ssh_runner.go:195] Run: rm -f paused
	I1004 03:21:22.904494   30630 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 03:21:22.906685   30630 out.go:177] * Done! kubectl is now configured to use "ha-994751" cluster and "default" namespace by default
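	For context, the readiness and health probes logged above can be spot-checked by hand against the same profile. This is a minimal sketch using standard kubectl and minikube commands; the profile/context name ha-994751 and the apiserver endpoint come from the log above, while the label selector and timeout are illustrative and are not what the test harness itself runs internally:

	    # Check that kube-system pods (including kube-proxy and the schedulers) report Ready
	    kubectl --context ha-994751 -n kube-system get pods -o wide
	    kubectl --context ha-994751 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=360s

	    # Same apiserver health probe the log issues against https://192.168.39.65:8443/healthz
	    kubectl --context ha-994751 get --raw /healthz

	    # Kubelet service check equivalent to the ssh_runner "systemctl is-active" call above
	    minikube -p ha-994751 ssh "sudo systemctl is-active kubelet"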
	
	
	==> CRI-O <==
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.274591811Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=390e47f3-b619-448b-8a25-dec3b74dbe84 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.276398479Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04717de0-33e8-4698-8d97-bdb0c3487dce name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.277101640Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012305277063634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04717de0-33e8-4698-8d97-bdb0c3487dce name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.277760954Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=852b8e39-51f6-4686-807f-006a5e33e81d name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.277820163Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=852b8e39-51f6-4686-807f-006a5e33e81d name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.278159764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dd8849f48bb11b35bb1f5bd585512cb0b89dc44b163449a6b50f15f54de02a5,PodSandboxId:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728012087721104622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586,PodSandboxId:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946866667557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe1e8ec5dfe43f5131ba996561aaabff07b86db463b8b1773b21b5e5c2d1261,PodSandboxId:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728011946910404717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd,PodSandboxId:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946858124672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5b
d8-44d6-a7dd-6eb87ef3b9ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99,PodSandboxId:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17280119
34650609445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160,PodSandboxId:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728011934180469533,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f,PodSandboxId:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728011924453506492,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec,PodSandboxId:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728011921323682309,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec,PodSandboxId:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728011921323115948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe,PodSandboxId:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728011921253183954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8,PodSandboxId:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728011921250593100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=852b8e39-51f6-4686-807f-006a5e33e81d name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.290351648Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=941e1bf0-7e40-443e-994e-f76ca1052b86 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.290706031Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-vh5j6,Uid:1e13c9e5-3c5b-47b9-8f41-391304b4184c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728012084122158637,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T03:21:23.807271406Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:cc60903f-91b9-4e59-92ab-9f16c09d38d2,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1728011946640149577,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-04T03:19:06.314614114Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-zgdck,Uid:dcd6ed49-8491-4eb0-9863-b498c76ec3c5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728011946639081079,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T03:19:06.316385216Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-l6zst,Uid:554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1728011946615050433,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T03:19:06.307604522Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&PodSandboxMetadata{Name:kindnet-2mhh2,Uid:442d5ad9-dc9c-4a07-90b3-549591f9d2f1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728011934078857830,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-10-04T03:18:52.255087227Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&PodSandboxMetadata{Name:kube-proxy-f44b9,Uid:e3e1a917-0150-4608-b5f3-b590d330d2ce,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728011934041548754,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T03:18:52.233691775Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-994751,Uid:d09d862da2ecf4fa4a0cc55773908218,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1728011921072544695,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d09d862da2ecf4fa4a0cc55773908218,kubernetes.io/config.seen: 2024-10-04T03:18:40.378105659Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-994751,Uid:940a4ffe37e8a399065ce324e2a3e96a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728011921066325762,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{kubernetes.io/config.hash: 940a
4ffe37e8a399065ce324e2a3e96a,kubernetes.io/config.seen: 2024-10-04T03:18:40.378106459Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-994751,Uid:c779652e8162a5324e798545569be164,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728011921058626396,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c779652e8162a5324e798545569be164,kubernetes.io/config.seen: 2024-10-04T03:18:40.378104500Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-994751,Ui
d:ca68d6f5cb32227962ccd27f257d0736,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728011921056535594,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.65:8443,kubernetes.io/config.hash: ca68d6f5cb32227962ccd27f257d0736,kubernetes.io/config.seen: 2024-10-04T03:18:40.378102927Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&PodSandboxMetadata{Name:etcd-ha-994751,Uid:15f64e9e1b892e5a5392a0aa1691bb56,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728011921055240968,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-994751,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.65:2379,kubernetes.io/config.hash: 15f64e9e1b892e5a5392a0aa1691bb56,kubernetes.io/config.seen: 2024-10-04T03:18:40.378098560Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=941e1bf0-7e40-443e-994e-f76ca1052b86 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.291710483Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33874a16-7610-4102-abd2-9112c98c7793 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.291807845Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33874a16-7610-4102-abd2-9112c98c7793 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.292181744Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dd8849f48bb11b35bb1f5bd585512cb0b89dc44b163449a6b50f15f54de02a5,PodSandboxId:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728012087721104622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586,PodSandboxId:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946866667557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe1e8ec5dfe43f5131ba996561aaabff07b86db463b8b1773b21b5e5c2d1261,PodSandboxId:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728011946910404717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd,PodSandboxId:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946858124672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5b
d8-44d6-a7dd-6eb87ef3b9ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99,PodSandboxId:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17280119
34650609445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160,PodSandboxId:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728011934180469533,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f,PodSandboxId:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728011924453506492,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec,PodSandboxId:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728011921323682309,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec,PodSandboxId:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728011921323115948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe,PodSandboxId:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728011921253183954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8,PodSandboxId:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728011921250593100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33874a16-7610-4102-abd2-9112c98c7793 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.330227309Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9bcdd67-de37-437d-b31e-eb297a518997 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.330302660Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9bcdd67-de37-437d-b31e-eb297a518997 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.331889685Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ecdd61a8-28f6-4da5-b825-e208bf1848b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.332375060Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012305332349882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecdd61a8-28f6-4da5-b825-e208bf1848b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.333185498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=093f5528-eb96-4717-9d5e-bdf36e1b9b3d name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.333243291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=093f5528-eb96-4717-9d5e-bdf36e1b9b3d name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.333482397Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dd8849f48bb11b35bb1f5bd585512cb0b89dc44b163449a6b50f15f54de02a5,PodSandboxId:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728012087721104622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586,PodSandboxId:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946866667557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe1e8ec5dfe43f5131ba996561aaabff07b86db463b8b1773b21b5e5c2d1261,PodSandboxId:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728011946910404717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd,PodSandboxId:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946858124672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5b
d8-44d6-a7dd-6eb87ef3b9ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99,PodSandboxId:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17280119
34650609445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160,PodSandboxId:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728011934180469533,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f,PodSandboxId:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728011924453506492,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec,PodSandboxId:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728011921323682309,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec,PodSandboxId:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728011921323115948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe,PodSandboxId:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728011921253183954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8,PodSandboxId:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728011921250593100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=093f5528-eb96-4717-9d5e-bdf36e1b9b3d name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.381329725Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16a75ea8-e278-4168-ba58-c2c38b8deb35 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.381432656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16a75ea8-e278-4168-ba58-c2c38b8deb35 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.385295125Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1f18180-a1d7-4d2d-bcc3-4e5e0836b3f1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.385788488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012305385757974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1f18180-a1d7-4d2d-bcc3-4e5e0836b3f1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.386529734Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22b8b323-05f0-4687-a8f9-de775bad94aa name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.386646754Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22b8b323-05f0-4687-a8f9-de775bad94aa name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:05 ha-994751 crio[664]: time="2024-10-04 03:25:05.386897434Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dd8849f48bb11b35bb1f5bd585512cb0b89dc44b163449a6b50f15f54de02a5,PodSandboxId:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728012087721104622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586,PodSandboxId:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946866667557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe1e8ec5dfe43f5131ba996561aaabff07b86db463b8b1773b21b5e5c2d1261,PodSandboxId:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728011946910404717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd,PodSandboxId:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946858124672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5b
d8-44d6-a7dd-6eb87ef3b9ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99,PodSandboxId:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17280119
34650609445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160,PodSandboxId:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728011934180469533,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f,PodSandboxId:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728011924453506492,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec,PodSandboxId:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728011921323682309,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec,PodSandboxId:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728011921323115948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe,PodSandboxId:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728011921253183954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8,PodSandboxId:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728011921250593100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22b8b323-05f0-4687-a8f9-de775bad94aa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9dd8849f48bb1       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   21e8386b77b62       busybox-7dff88458-vh5j6
	2fe1e8ec5dfe4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   dab235bc541ca       storage-provisioner
	eb082a979b36c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   be9b34d6ca0bf       coredns-7c65d6cfc9-zgdck
	93aa8fd39f9c0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   d9a5ca3b325fa       coredns-7c65d6cfc9-l6zst
	6a3f40105608f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   454652c11f4fe       kindnet-2mhh2
	731622c5caa6f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   44f2b282edd57       kube-proxy-f44b9
	8830f0c28d759       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   5461b35eef9c3       kube-vip-ha-994751
	e49d081b73667       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   0372e9d489f05       kube-scheduler-ha-994751
	f5568cb7839e2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   c61920ab308f6       etcd-ha-994751
	849282c506754       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   6d7ea048eea90       kube-apiserver-ha-994751
	f041d718c872f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   8c1c0f1b1a430       kube-controller-manager-ha-994751
	
	
	==> coredns [93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd] <==
	[INFO] 10.244.2.2:42178 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.010745169s
	[INFO] 10.244.2.2:34829 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.009009564s
	[INFO] 10.244.0.4:43910 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001485572s
	[INFO] 10.244.1.2:45378 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000181404s
	[INFO] 10.244.1.2:40886 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001942971s
	[INFO] 10.244.2.2:45461 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000217787s
	[INFO] 10.244.2.2:56545 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000167289s
	[INFO] 10.244.2.2:52063 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000246892s
	[INFO] 10.244.0.4:48765 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150103s
	[INFO] 10.244.1.2:53871 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168625s
	[INFO] 10.244.1.2:58325 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001736755s
	[INFO] 10.244.1.2:38700 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085818s
	[INFO] 10.244.2.2:53525 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016163s
	[INFO] 10.244.2.2:55339 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126355s
	[INFO] 10.244.0.4:33506 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176834s
	[INFO] 10.244.0.4:47714 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136674s
	[INFO] 10.244.0.4:49593 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139876s
	[INFO] 10.244.1.2:51243 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137889s
	[INFO] 10.244.2.2:56043 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000221873s
	[INFO] 10.244.2.2:35783 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000138959s
	[INFO] 10.244.0.4:37503 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013937s
	[INFO] 10.244.0.4:46310 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132408s
	[INFO] 10.244.0.4:35014 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074557s
	[INFO] 10.244.1.2:51803 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153481s
	[INFO] 10.244.1.2:47758 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000198394s
	
	
	==> coredns [eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586] <==
	[INFO] 10.244.2.2:43924 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.01283325s
	[INFO] 10.244.2.2:35798 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148903s
	[INFO] 10.244.0.4:59562 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140549s
	[INFO] 10.244.0.4:41362 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002209213s
	[INFO] 10.244.0.4:41786 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133758s
	[INFO] 10.244.0.4:49269 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001539557s
	[INFO] 10.244.0.4:56941 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018736s
	[INFO] 10.244.0.4:47984 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173422s
	[INFO] 10.244.0.4:41970 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061431s
	[INFO] 10.244.1.2:32918 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119893s
	[INFO] 10.244.1.2:39792 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093113s
	[INFO] 10.244.1.2:41331 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001259323s
	[INFO] 10.244.1.2:45464 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106483s
	[INFO] 10.244.1.2:35852 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153198s
	[INFO] 10.244.2.2:38240 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114031s
	[INFO] 10.244.2.2:54004 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008059s
	[INFO] 10.244.0.4:39542 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092418s
	[INFO] 10.244.1.2:41262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166812s
	[INFO] 10.244.1.2:55889 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146278s
	[INFO] 10.244.1.2:35654 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131643s
	[INFO] 10.244.2.2:37029 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012813s
	[INFO] 10.244.2.2:33774 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000223324s
	[INFO] 10.244.0.4:38911 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138291s
	[INFO] 10.244.1.2:56619 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093621s
	[INFO] 10.244.1.2:33622 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154511s
	
	
	==> describe nodes <==
	Name:               ha-994751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-994751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-994751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T03_18_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:18:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-994751
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:21:51 +0000   Fri, 04 Oct 2024 03:18:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:21:51 +0000   Fri, 04 Oct 2024 03:18:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:21:51 +0000   Fri, 04 Oct 2024 03:18:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:21:51 +0000   Fri, 04 Oct 2024 03:19:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    ha-994751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7452b105a68246eeb61757acefd7f693
	  System UUID:                7452b105-a682-46ee-b617-57acefd7f693
	  Boot ID:                    aecf415c-e5c2-46a9-81d5-d95311218d51
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vh5j6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 coredns-7c65d6cfc9-l6zst             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 coredns-7c65d6cfc9-zgdck             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 etcd-ha-994751                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m18s
	  kube-system                 kindnet-2mhh2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m13s
	  kube-system                 kube-apiserver-ha-994751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-controller-manager-ha-994751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-proxy-f44b9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-scheduler-ha-994751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-vip-ha-994751                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m11s  kube-proxy       
	  Normal  Starting                 6m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m18s  kubelet          Node ha-994751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m18s  kubelet          Node ha-994751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m18s  kubelet          Node ha-994751 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m14s  node-controller  Node ha-994751 event: Registered Node ha-994751 in Controller
	  Normal  NodeReady                5m59s  kubelet          Node ha-994751 status is now: NodeReady
	  Normal  RegisteredNode           5m18s  node-controller  Node ha-994751 event: Registered Node ha-994751 in Controller
	  Normal  RegisteredNode           4m2s   node-controller  Node ha-994751 event: Registered Node ha-994751 in Controller
	
	
	Name:               ha-994751-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-994751-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-994751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_04T03_19_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:19:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-994751-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:22:42 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 04 Oct 2024 03:21:42 +0000   Fri, 04 Oct 2024 03:23:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 04 Oct 2024 03:21:42 +0000   Fri, 04 Oct 2024 03:23:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 04 Oct 2024 03:21:42 +0000   Fri, 04 Oct 2024 03:23:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 04 Oct 2024 03:21:42 +0000   Fri, 04 Oct 2024 03:23:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    ha-994751-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f6683e6a9e1244f787f84f2a5c1bf490
	  System UUID:                f6683e6a-9e12-44f7-87f8-4f2a5c1bf490
	  Boot ID:                    8b02ddc0-820d-4de5-b649-7e2202f66ea5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wc5kg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 etcd-ha-994751-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m24s
	  kube-system                 kindnet-rmcvt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m26s
	  kube-system                 kube-apiserver-ha-994751-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-controller-manager-ha-994751-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-ph6cf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-scheduler-ha-994751-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-vip-ha-994751-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m22s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m26s (x8 over 5m26s)  kubelet          Node ha-994751-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m26s (x8 over 5m26s)  kubelet          Node ha-994751-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m26s (x7 over 5m26s)  kubelet          Node ha-994751-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-994751-m02 event: Registered Node ha-994751-m02 in Controller
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-994751-m02 event: Registered Node ha-994751-m02 in Controller
	  Normal  RegisteredNode           4m2s                   node-controller  Node ha-994751-m02 event: Registered Node ha-994751-m02 in Controller
	  Normal  NodeNotReady             101s                   node-controller  Node ha-994751-m02 status is now: NodeNotReady
	
	
	Name:               ha-994751-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-994751-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-994751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_04T03_20_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:20:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-994751-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:20:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:20:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:20:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:21:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-994751-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 df18b27d8a2e4c8893a601b97ec7e8e0
	  System UUID:                df18b27d-8a2e-4c88-93a6-01b97ec7e8e0
	  Boot ID:                    138aa962-c7a2-47ea-82c1-2a5ccfbc3de0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nrdqk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 etcd-ha-994751-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m9s
	  kube-system                 kindnet-clt5p                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m11s
	  kube-system                 kube-apiserver-ha-994751-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-ha-994751-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-9q6q2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-scheduler-ha-994751-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-vip-ha-994751-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m11s (x8 over 4m11s)  kubelet          Node ha-994751-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x8 over 4m11s)  kubelet          Node ha-994751-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x7 over 4m11s)  kubelet          Node ha-994751-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-994751-m03 event: Registered Node ha-994751-m03 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-994751-m03 event: Registered Node ha-994751-m03 in Controller
	  Normal  RegisteredNode           4m2s                   node-controller  Node ha-994751-m03 event: Registered Node ha-994751-m03 in Controller
	
	
	Name:               ha-994751-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-994751-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-994751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_04T03_22_03_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:22:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-994751-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:24:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:22:33 +0000   Fri, 04 Oct 2024 03:22:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:22:33 +0000   Fri, 04 Oct 2024 03:22:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:22:33 +0000   Fri, 04 Oct 2024 03:22:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:22:33 +0000   Fri, 04 Oct 2024 03:22:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    ha-994751-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d61802e745d4414c8e0a1c3e5c1319f7
	  System UUID:                d61802e7-45d4-414c-8e0a-1c3e5c1319f7
	  Boot ID:                    f154d01f-d315-40b5-84e6-0d0b669735cf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-sggz9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-xsz4w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-994751-m04 event: Registered Node ha-994751-m04 in Controller
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-994751-m04 event: Registered Node ha-994751-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m3s)  kubelet          Node ha-994751-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m3s)  kubelet          Node ha-994751-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m3s)  kubelet          Node ha-994751-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-994751-m04 event: Registered Node ha-994751-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-994751-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 4 03:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050646] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040240] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.800548] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.470270] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.581508] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.982603] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.059297] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061306] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.198058] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.129574] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.276832] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.888308] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +3.806908] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.054958] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.117103] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.085956] kauditd_printk_skb: 79 callbacks suppressed
	[  +7.063470] kauditd_printk_skb: 21 callbacks suppressed
	[Oct 4 03:19] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.285701] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec] <==
	{"level":"warn","ts":"2024-10-04T03:25:05.535016Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.574317Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.665418Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.674079Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.674413Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.677610Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.682316Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.699696Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.709794Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.718212Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.724210Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.729557Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.740171Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.748691Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.774338Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.778446Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.785859Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.791311Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.798417Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.804135Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.808236Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.818455Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.828030Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.843098Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:05.873821Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 03:25:05 up 6 min,  0 users,  load average: 0.14, 0.16, 0.09
	Linux ha-994751 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99] <==
	I1004 03:24:25.998029       1 main.go:322] Node ha-994751-m04 has CIDR [10.244.3.0/24] 
	I1004 03:24:35.996235       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1004 03:24:35.996305       1 main.go:299] handling current node
	I1004 03:24:35.996325       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I1004 03:24:35.996331       1 main.go:322] Node ha-994751-m02 has CIDR [10.244.1.0/24] 
	I1004 03:24:35.996493       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1004 03:24:35.996518       1 main.go:322] Node ha-994751-m03 has CIDR [10.244.2.0/24] 
	I1004 03:24:35.996564       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I1004 03:24:35.996569       1 main.go:322] Node ha-994751-m04 has CIDR [10.244.3.0/24] 
	I1004 03:24:45.999760       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1004 03:24:45.999899       1 main.go:299] handling current node
	I1004 03:24:46.000028       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I1004 03:24:46.000107       1 main.go:322] Node ha-994751-m02 has CIDR [10.244.1.0/24] 
	I1004 03:24:46.000367       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1004 03:24:46.000422       1 main.go:322] Node ha-994751-m03 has CIDR [10.244.2.0/24] 
	I1004 03:24:46.000525       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I1004 03:24:46.000568       1 main.go:322] Node ha-994751-m04 has CIDR [10.244.3.0/24] 
	I1004 03:24:55.996427       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1004 03:24:55.996581       1 main.go:299] handling current node
	I1004 03:24:55.996609       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I1004 03:24:55.996628       1 main.go:322] Node ha-994751-m02 has CIDR [10.244.1.0/24] 
	I1004 03:24:55.996891       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1004 03:24:55.997045       1 main.go:322] Node ha-994751-m03 has CIDR [10.244.2.0/24] 
	I1004 03:24:55.997190       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I1004 03:24:55.997280       1 main.go:322] Node ha-994751-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe] <==
	I1004 03:18:46.533293       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 03:18:46.536324       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1004 03:18:46.567509       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.65]
	I1004 03:18:46.569728       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:18:46.579199       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:18:47.324394       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 03:18:47.342483       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1004 03:18:47.354293       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 03:18:52.030260       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1004 03:18:52.131882       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1004 03:21:29.605335       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53690: use of closed network connection
	E1004 03:21:29.795618       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53702: use of closed network connection
	E1004 03:21:29.974284       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53722: use of closed network connection
	E1004 03:21:30.184885       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53734: use of closed network connection
	E1004 03:21:30.399362       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53748: use of closed network connection
	E1004 03:21:30.586499       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53770: use of closed network connection
	E1004 03:21:30.773657       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53776: use of closed network connection
	E1004 03:21:30.946921       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53796: use of closed network connection
	E1004 03:21:31.140751       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53812: use of closed network connection
	E1004 03:21:31.439406       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53848: use of closed network connection
	E1004 03:21:31.610289       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53874: use of closed network connection
	E1004 03:21:31.791527       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53896: use of closed network connection
	E1004 03:21:31.973829       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53924: use of closed network connection
	E1004 03:21:32.157183       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53938: use of closed network connection
	E1004 03:21:32.326553       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53952: use of closed network connection
	
	
	==> kube-controller-manager [f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8] <==
	I1004 03:22:03.059069       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-994751-m04" podCIDRs=["10.244.3.0/24"]
	I1004 03:22:03.059118       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.061876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.076574       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.137039       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.276697       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.662795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.977537       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:04.044472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:06.344839       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:06.345923       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-994751-m04"
	I1004 03:22:06.383881       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:13.412719       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:24.487665       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-994751-m04"
	I1004 03:22:24.487754       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:24.502742       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:26.362397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:33.863379       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:23:24.007837       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-994751-m04"
	I1004 03:23:24.008551       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m02"
	I1004 03:23:24.038687       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m02"
	I1004 03:23:24.187288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.759379ms"
	I1004 03:23:24.187415       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.69µs"
	I1004 03:23:26.454826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m02"
	I1004 03:23:29.201808       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m02"
	
	
	==> kube-proxy [731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:18:54.520708       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:18:54.543515       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.65"]
	E1004 03:18:54.543642       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:18:54.585531       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:18:54.585592       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:18:54.585623       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:18:54.595069       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:18:54.598246       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:18:54.598343       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:18:54.602801       1 config.go:199] "Starting service config controller"
	I1004 03:18:54.603172       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:18:54.603521       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:18:54.603587       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:18:54.607605       1 config.go:328] "Starting node config controller"
	I1004 03:18:54.607621       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:18:54.704654       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:18:54.704732       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:18:54.707708       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec] <==
	W1004 03:18:45.760588       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:18:45.760709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:18:45.902575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:18:45.902704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:18:45.937221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 03:18:45.937512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:18:46.030883       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 03:18:46.031049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1004 03:18:48.095287       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1004 03:22:03.109132       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zh45q\": pod kindnet-zh45q is already assigned to node \"ha-994751-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zh45q" node="ha-994751-m04"
	E1004 03:22:03.113875       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cc0c3789-7dca-4ede-a355-9ac6d9db68c2(kube-system/kindnet-zh45q) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-zh45q"
	E1004 03:22:03.114052       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zh45q\": pod kindnet-zh45q is already assigned to node \"ha-994751-m04\"" pod="kube-system/kindnet-zh45q"
	I1004 03:22:03.114143       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zh45q" node="ha-994751-m04"
	E1004 03:22:03.121368       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xsz4w\": pod kube-proxy-xsz4w is already assigned to node \"ha-994751-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xsz4w" node="ha-994751-m04"
	E1004 03:22:03.121569       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4f6e672a-e80b-4f45-b3a5-98dfa1ebaad3(kube-system/kube-proxy-xsz4w) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-xsz4w"
	E1004 03:22:03.121624       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xsz4w\": pod kube-proxy-xsz4w is already assigned to node \"ha-994751-m04\"" pod="kube-system/kube-proxy-xsz4w"
	I1004 03:22:03.121686       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xsz4w" node="ha-994751-m04"
	E1004 03:22:03.177157       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-zbb9z\": pod kube-proxy-zbb9z is already assigned to node \"ha-994751-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-zbb9z" node="ha-994751-m04"
	E1004 03:22:03.177330       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a7948b15-0522-4cbd-8803-8c139b2e791a(kube-system/kube-proxy-zbb9z) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-zbb9z"
	E1004 03:22:03.177379       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zbb9z\": pod kube-proxy-zbb9z is already assigned to node \"ha-994751-m04\"" pod="kube-system/kube-proxy-zbb9z"
	I1004 03:22:03.177445       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-zbb9z" node="ha-994751-m04"
	E1004 03:22:03.177921       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qfb5r\": pod kindnet-qfb5r is already assigned to node \"ha-994751-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qfb5r" node="ha-994751-m04"
	E1004 03:22:03.181030       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 085d0454-1ccc-408e-ae12-366c29ab0a15(kube-system/kindnet-qfb5r) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qfb5r"
	E1004 03:22:03.181113       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qfb5r\": pod kindnet-qfb5r is already assigned to node \"ha-994751-m04\"" pod="kube-system/kindnet-qfb5r"
	I1004 03:22:03.181162       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qfb5r" node="ha-994751-m04"
	
	
	==> kubelet <==
	Oct 04 03:23:47 ha-994751 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:23:47 ha-994751 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:23:47 ha-994751 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:23:47 ha-994751 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:23:47 ha-994751 kubelet[1305]: E1004 03:23:47.373529    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012227373073617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:23:47 ha-994751 kubelet[1305]: E1004 03:23:47.373558    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012227373073617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:23:57 ha-994751 kubelet[1305]: E1004 03:23:57.376221    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012237375404117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:23:57 ha-994751 kubelet[1305]: E1004 03:23:57.376607    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012237375404117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:07 ha-994751 kubelet[1305]: E1004 03:24:07.379453    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012247378682972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:07 ha-994751 kubelet[1305]: E1004 03:24:07.379509    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012247378682972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:17 ha-994751 kubelet[1305]: E1004 03:24:17.381784    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012257381348480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:17 ha-994751 kubelet[1305]: E1004 03:24:17.382305    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012257381348480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:27 ha-994751 kubelet[1305]: E1004 03:24:27.387309    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012267384211934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:27 ha-994751 kubelet[1305]: E1004 03:24:27.387674    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012267384211934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:37 ha-994751 kubelet[1305]: E1004 03:24:37.389662    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012277389023499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:37 ha-994751 kubelet[1305]: E1004 03:24:37.390147    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012277389023499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:47 ha-994751 kubelet[1305]: E1004 03:24:47.337368    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:24:47 ha-994751 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:24:47 ha-994751 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:24:47 ha-994751 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:24:47 ha-994751 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:24:47 ha-994751 kubelet[1305]: E1004 03:24:47.393080    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012287392471580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:47 ha-994751 kubelet[1305]: E1004 03:24:47.393113    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012287392471580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:57 ha-994751 kubelet[1305]: E1004 03:24:57.395248    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012297394773017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:57 ha-994751 kubelet[1305]: E1004 03:24:57.395590    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012297394773017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-994751 -n ha-994751
helpers_test.go:261: (dbg) Run:  kubectl --context ha-994751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.397193541s)
ha_test.go:415: expected profile "ha-994751" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-994751\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-994751\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-994751\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.65\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.117\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.53\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.134\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\
"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\
":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-994751 -n ha-994751
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-994751 logs -n 25: (1.44928606s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1363640037/001/cp-test_ha-994751-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751:/home/docker/cp-test_ha-994751-m03_ha-994751.txt                       |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751 sudo cat                                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m03_ha-994751.txt                                 |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m02:/home/docker/cp-test_ha-994751-m03_ha-994751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m02 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m03_ha-994751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04:/home/docker/cp-test_ha-994751-m03_ha-994751-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m04 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m03_ha-994751-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp testdata/cp-test.txt                                                | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1363640037/001/cp-test_ha-994751-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751:/home/docker/cp-test_ha-994751-m04_ha-994751.txt                       |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751 sudo cat                                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751.txt                                 |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m02:/home/docker/cp-test_ha-994751-m04_ha-994751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m02 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03:/home/docker/cp-test_ha-994751-m04_ha-994751-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m03 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-994751 node stop m02 -v=7                                                     | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 03:18:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 03:18:05.722757   30630 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:18:05.722861   30630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:18:05.722866   30630 out.go:358] Setting ErrFile to fd 2...
	I1004 03:18:05.722871   30630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:18:05.723051   30630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 03:18:05.723672   30630 out.go:352] Setting JSON to false
	I1004 03:18:05.724646   30630 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3631,"bootTime":1728008255,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 03:18:05.724743   30630 start.go:139] virtualization: kvm guest
	I1004 03:18:05.726903   30630 out.go:177] * [ha-994751] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 03:18:05.728435   30630 notify.go:220] Checking for updates...
	I1004 03:18:05.728459   30630 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:18:05.730163   30630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:18:05.731580   30630 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:18:05.733048   30630 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:05.734449   30630 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 03:18:05.735914   30630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:18:05.737675   30630 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:18:05.774405   30630 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 03:18:05.775959   30630 start.go:297] selected driver: kvm2
	I1004 03:18:05.775980   30630 start.go:901] validating driver "kvm2" against <nil>
	I1004 03:18:05.775993   30630 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:18:05.776759   30630 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:18:05.776855   30630 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 03:18:05.791915   30630 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 03:18:05.791974   30630 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 03:18:05.792218   30630 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:18:05.792245   30630 cni.go:84] Creating CNI manager for ""
	I1004 03:18:05.792281   30630 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1004 03:18:05.792289   30630 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1004 03:18:05.792342   30630 start.go:340] cluster config:
	{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:18:05.792429   30630 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:18:05.794321   30630 out.go:177] * Starting "ha-994751" primary control-plane node in "ha-994751" cluster
	I1004 03:18:05.795797   30630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:18:05.795855   30630 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 03:18:05.795867   30630 cache.go:56] Caching tarball of preloaded images
	I1004 03:18:05.795948   30630 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 03:18:05.795958   30630 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:18:05.796250   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:18:05.796278   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json: {Name:mk8f786fa93ab6935652e46df2caeb1892ffd1fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:05.796426   30630 start.go:360] acquireMachinesLock for ha-994751: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 03:18:05.796455   30630 start.go:364] duration metric: took 15.921µs to acquireMachinesLock for "ha-994751"
	I1004 03:18:05.796470   30630 start.go:93] Provisioning new machine with config: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:18:05.796525   30630 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 03:18:05.798287   30630 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 03:18:05.798440   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:05.798475   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:05.812686   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34047
	I1004 03:18:05.813143   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:05.813678   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:05.813709   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:05.814066   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:05.814254   30630 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:18:05.814407   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:05.814549   30630 start.go:159] libmachine.API.Create for "ha-994751" (driver="kvm2")
	I1004 03:18:05.814572   30630 client.go:168] LocalClient.Create starting
	I1004 03:18:05.814612   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 03:18:05.814645   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:18:05.814661   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:18:05.814721   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 03:18:05.814738   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:18:05.814750   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:18:05.814764   30630 main.go:141] libmachine: Running pre-create checks...
	I1004 03:18:05.814779   30630 main.go:141] libmachine: (ha-994751) Calling .PreCreateCheck
	I1004 03:18:05.815056   30630 main.go:141] libmachine: (ha-994751) Calling .GetConfigRaw
	I1004 03:18:05.815402   30630 main.go:141] libmachine: Creating machine...
	I1004 03:18:05.815413   30630 main.go:141] libmachine: (ha-994751) Calling .Create
	I1004 03:18:05.815566   30630 main.go:141] libmachine: (ha-994751) Creating KVM machine...
	I1004 03:18:05.816861   30630 main.go:141] libmachine: (ha-994751) DBG | found existing default KVM network
	I1004 03:18:05.817536   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:05.817406   30653 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I1004 03:18:05.817563   30630 main.go:141] libmachine: (ha-994751) DBG | created network xml: 
	I1004 03:18:05.817586   30630 main.go:141] libmachine: (ha-994751) DBG | <network>
	I1004 03:18:05.817592   30630 main.go:141] libmachine: (ha-994751) DBG |   <name>mk-ha-994751</name>
	I1004 03:18:05.817597   30630 main.go:141] libmachine: (ha-994751) DBG |   <dns enable='no'/>
	I1004 03:18:05.817602   30630 main.go:141] libmachine: (ha-994751) DBG |   
	I1004 03:18:05.817610   30630 main.go:141] libmachine: (ha-994751) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1004 03:18:05.817615   30630 main.go:141] libmachine: (ha-994751) DBG |     <dhcp>
	I1004 03:18:05.817621   30630 main.go:141] libmachine: (ha-994751) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1004 03:18:05.817629   30630 main.go:141] libmachine: (ha-994751) DBG |     </dhcp>
	I1004 03:18:05.817644   30630 main.go:141] libmachine: (ha-994751) DBG |   </ip>
	I1004 03:18:05.817652   30630 main.go:141] libmachine: (ha-994751) DBG |   
	I1004 03:18:05.817659   30630 main.go:141] libmachine: (ha-994751) DBG | </network>
	I1004 03:18:05.817668   30630 main.go:141] libmachine: (ha-994751) DBG | 
	I1004 03:18:05.823178   30630 main.go:141] libmachine: (ha-994751) DBG | trying to create private KVM network mk-ha-994751 192.168.39.0/24...
	I1004 03:18:05.886885   30630 main.go:141] libmachine: (ha-994751) DBG | private KVM network mk-ha-994751 192.168.39.0/24 created
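
The <network> XML logged above is what the kvm2 driver hands to libvirt before the guest exists. A minimal sketch of doing the same step by hand from Go, assuming virsh is installed and the XML has been saved to a hypothetical mk-ha-994751.xml; net-define, net-start and net-autostart are standard virsh verbs:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// defineNetwork registers and starts an isolated libvirt network from an XML
	// definition like the <network> block minikube logs above. The XML path is a
	// placeholder for wherever the definition was written.
	func defineNetwork(xmlPath, name string) error {
		for _, args := range [][]string{
			{"net-define", xmlPath},
			{"net-start", name},
			{"net-autostart", name},
		} {
			cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("virsh %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := defineNetwork("mk-ha-994751.xml", "mk-ha-994751"); err != nil {
			log.Fatal(err)
		}
	}
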
	I1004 03:18:05.886925   30630 main.go:141] libmachine: (ha-994751) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751 ...
	I1004 03:18:05.886940   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:05.886875   30653 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:05.886958   30630 main.go:141] libmachine: (ha-994751) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 03:18:05.887024   30630 main.go:141] libmachine: (ha-994751) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 03:18:06.142449   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:06.142299   30653 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa...
	I1004 03:18:06.210635   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:06.210526   30653 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/ha-994751.rawdisk...
	I1004 03:18:06.210664   30630 main.go:141] libmachine: (ha-994751) DBG | Writing magic tar header
	I1004 03:18:06.210677   30630 main.go:141] libmachine: (ha-994751) DBG | Writing SSH key tar header
	I1004 03:18:06.210688   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:06.210638   30653 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751 ...
	I1004 03:18:06.210755   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751
	I1004 03:18:06.210796   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751 (perms=drwx------)
	I1004 03:18:06.210813   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 03:18:06.210829   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:06.210837   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 03:18:06.210844   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 03:18:06.210850   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins
	I1004 03:18:06.210857   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 03:18:06.210924   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 03:18:06.210944   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 03:18:06.210949   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home
	I1004 03:18:06.210957   30630 main.go:141] libmachine: (ha-994751) DBG | Skipping /home - not owner
	I1004 03:18:06.210976   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 03:18:06.210990   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 03:18:06.210999   30630 main.go:141] libmachine: (ha-994751) Creating domain...
	I1004 03:18:06.212079   30630 main.go:141] libmachine: (ha-994751) define libvirt domain using xml: 
	I1004 03:18:06.212103   30630 main.go:141] libmachine: (ha-994751) <domain type='kvm'>
	I1004 03:18:06.212112   30630 main.go:141] libmachine: (ha-994751)   <name>ha-994751</name>
	I1004 03:18:06.212118   30630 main.go:141] libmachine: (ha-994751)   <memory unit='MiB'>2200</memory>
	I1004 03:18:06.212126   30630 main.go:141] libmachine: (ha-994751)   <vcpu>2</vcpu>
	I1004 03:18:06.212132   30630 main.go:141] libmachine: (ha-994751)   <features>
	I1004 03:18:06.212140   30630 main.go:141] libmachine: (ha-994751)     <acpi/>
	I1004 03:18:06.212152   30630 main.go:141] libmachine: (ha-994751)     <apic/>
	I1004 03:18:06.212164   30630 main.go:141] libmachine: (ha-994751)     <pae/>
	I1004 03:18:06.212177   30630 main.go:141] libmachine: (ha-994751)     
	I1004 03:18:06.212187   30630 main.go:141] libmachine: (ha-994751)   </features>
	I1004 03:18:06.212192   30630 main.go:141] libmachine: (ha-994751)   <cpu mode='host-passthrough'>
	I1004 03:18:06.212196   30630 main.go:141] libmachine: (ha-994751)   
	I1004 03:18:06.212200   30630 main.go:141] libmachine: (ha-994751)   </cpu>
	I1004 03:18:06.212204   30630 main.go:141] libmachine: (ha-994751)   <os>
	I1004 03:18:06.212210   30630 main.go:141] libmachine: (ha-994751)     <type>hvm</type>
	I1004 03:18:06.212215   30630 main.go:141] libmachine: (ha-994751)     <boot dev='cdrom'/>
	I1004 03:18:06.212228   30630 main.go:141] libmachine: (ha-994751)     <boot dev='hd'/>
	I1004 03:18:06.212253   30630 main.go:141] libmachine: (ha-994751)     <bootmenu enable='no'/>
	I1004 03:18:06.212268   30630 main.go:141] libmachine: (ha-994751)   </os>
	I1004 03:18:06.212286   30630 main.go:141] libmachine: (ha-994751)   <devices>
	I1004 03:18:06.212296   30630 main.go:141] libmachine: (ha-994751)     <disk type='file' device='cdrom'>
	I1004 03:18:06.212309   30630 main.go:141] libmachine: (ha-994751)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/boot2docker.iso'/>
	I1004 03:18:06.212319   30630 main.go:141] libmachine: (ha-994751)       <target dev='hdc' bus='scsi'/>
	I1004 03:18:06.212330   30630 main.go:141] libmachine: (ha-994751)       <readonly/>
	I1004 03:18:06.212334   30630 main.go:141] libmachine: (ha-994751)     </disk>
	I1004 03:18:06.212342   30630 main.go:141] libmachine: (ha-994751)     <disk type='file' device='disk'>
	I1004 03:18:06.212354   30630 main.go:141] libmachine: (ha-994751)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 03:18:06.212370   30630 main.go:141] libmachine: (ha-994751)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/ha-994751.rawdisk'/>
	I1004 03:18:06.212380   30630 main.go:141] libmachine: (ha-994751)       <target dev='hda' bus='virtio'/>
	I1004 03:18:06.212388   30630 main.go:141] libmachine: (ha-994751)     </disk>
	I1004 03:18:06.212397   30630 main.go:141] libmachine: (ha-994751)     <interface type='network'>
	I1004 03:18:06.212406   30630 main.go:141] libmachine: (ha-994751)       <source network='mk-ha-994751'/>
	I1004 03:18:06.212415   30630 main.go:141] libmachine: (ha-994751)       <model type='virtio'/>
	I1004 03:18:06.212440   30630 main.go:141] libmachine: (ha-994751)     </interface>
	I1004 03:18:06.212460   30630 main.go:141] libmachine: (ha-994751)     <interface type='network'>
	I1004 03:18:06.212467   30630 main.go:141] libmachine: (ha-994751)       <source network='default'/>
	I1004 03:18:06.212471   30630 main.go:141] libmachine: (ha-994751)       <model type='virtio'/>
	I1004 03:18:06.212479   30630 main.go:141] libmachine: (ha-994751)     </interface>
	I1004 03:18:06.212494   30630 main.go:141] libmachine: (ha-994751)     <serial type='pty'>
	I1004 03:18:06.212502   30630 main.go:141] libmachine: (ha-994751)       <target port='0'/>
	I1004 03:18:06.212508   30630 main.go:141] libmachine: (ha-994751)     </serial>
	I1004 03:18:06.212516   30630 main.go:141] libmachine: (ha-994751)     <console type='pty'>
	I1004 03:18:06.212520   30630 main.go:141] libmachine: (ha-994751)       <target type='serial' port='0'/>
	I1004 03:18:06.212542   30630 main.go:141] libmachine: (ha-994751)     </console>
	I1004 03:18:06.212560   30630 main.go:141] libmachine: (ha-994751)     <rng model='virtio'>
	I1004 03:18:06.212574   30630 main.go:141] libmachine: (ha-994751)       <backend model='random'>/dev/random</backend>
	I1004 03:18:06.212585   30630 main.go:141] libmachine: (ha-994751)     </rng>
	I1004 03:18:06.212593   30630 main.go:141] libmachine: (ha-994751)     
	I1004 03:18:06.212602   30630 main.go:141] libmachine: (ha-994751)     
	I1004 03:18:06.212610   30630 main.go:141] libmachine: (ha-994751)   </devices>
	I1004 03:18:06.212618   30630 main.go:141] libmachine: (ha-994751) </domain>
	I1004 03:18:06.212627   30630 main.go:141] libmachine: (ha-994751) 
	I1004 03:18:06.216801   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:e9:7d:48 in network default
	I1004 03:18:06.217289   30630 main.go:141] libmachine: (ha-994751) Ensuring networks are active...
	I1004 03:18:06.217308   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:06.217978   30630 main.go:141] libmachine: (ha-994751) Ensuring network default is active
	I1004 03:18:06.218330   30630 main.go:141] libmachine: (ha-994751) Ensuring network mk-ha-994751 is active
	I1004 03:18:06.218792   30630 main.go:141] libmachine: (ha-994751) Getting domain xml...
	I1004 03:18:06.219458   30630 main.go:141] libmachine: (ha-994751) Creating domain...
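
The <domain> definition above is likewise rendered from a template and then handed to libvirt. A compressed stand-in using only the Go standard library; the template text and field names below are illustrative, not the driver's actual ones:

	package main

	import (
		"os"
		"text/template"
	)

	// domainTmpl is a trimmed-down stand-in for the kvm2 driver's domain template;
	// only a few of the elements visible in the log are reproduced.
	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMiB}}</memory>
	  <vcpu>{{.VCPU}}</vcpu>
	  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
	  <devices>
	    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
	    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
	    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
	  </devices>
	</domain>
	`

	type domainConfig struct {
		Name      string
		MemoryMiB int
		VCPU      int
		ISO, Disk string
		Network   string
	}

	func main() {
		cfg := domainConfig{
			Name: "ha-994751", MemoryMiB: 2200, VCPU: 2,
			ISO:     "/path/to/boot2docker.iso", // placeholder paths
			Disk:    "/path/to/ha-994751.rawdisk",
			Network: "mk-ha-994751",
		}
		// Render to stdout; the driver would hand the resulting XML to libvirt to define the domain.
		template.Must(template.New("domain").Parse(domainTmpl)).Execute(os.Stdout, cfg)
	}
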
	I1004 03:18:07.407094   30630 main.go:141] libmachine: (ha-994751) Waiting to get IP...
	I1004 03:18:07.407817   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:07.408229   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:07.408273   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:07.408187   30653 retry.go:31] will retry after 265.096314ms: waiting for machine to come up
	I1004 03:18:07.674734   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:07.675129   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:07.675155   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:07.675076   30653 retry.go:31] will retry after 390.620211ms: waiting for machine to come up
	I1004 03:18:08.067622   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:08.068086   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:08.068114   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:08.068031   30653 retry.go:31] will retry after 362.909556ms: waiting for machine to come up
	I1004 03:18:08.432460   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:08.432888   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:08.432909   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:08.432822   30653 retry.go:31] will retry after 609.869022ms: waiting for machine to come up
	I1004 03:18:09.044728   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:09.045180   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:09.045206   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:09.045129   30653 retry.go:31] will retry after 721.849297ms: waiting for machine to come up
	I1004 03:18:09.769005   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:09.769517   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:09.769542   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:09.769465   30653 retry.go:31] will retry after 920.066652ms: waiting for machine to come up
	I1004 03:18:10.691477   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:10.691934   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:10.691982   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:10.691880   30653 retry.go:31] will retry after 915.375779ms: waiting for machine to come up
	I1004 03:18:11.608614   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:11.609000   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:11.609026   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:11.608956   30653 retry.go:31] will retry after 1.213056064s: waiting for machine to come up
	I1004 03:18:12.823425   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:12.823843   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:12.823863   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:12.823799   30653 retry.go:31] will retry after 1.167496597s: waiting for machine to come up
	I1004 03:18:13.993222   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:13.993651   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:13.993670   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:13.993625   30653 retry.go:31] will retry after 1.774059142s: waiting for machine to come up
	I1004 03:18:15.769014   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:15.769477   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:15.769521   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:15.769420   30653 retry.go:31] will retry after 2.081580382s: waiting for machine to come up
	I1004 03:18:17.853131   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:17.853479   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:17.853503   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:17.853441   30653 retry.go:31] will retry after 3.090115259s: waiting for machine to come up
	I1004 03:18:20.945030   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:20.945469   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:20.945493   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:20.945409   30653 retry.go:31] will retry after 4.314609333s: waiting for machine to come up
	I1004 03:18:25.264846   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:25.265316   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:25.265335   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:25.265278   30653 retry.go:31] will retry after 4.302479318s: waiting for machine to come up
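
The repeated "will retry after ..." lines come from a backoff helper that polls for the guest's DHCP lease until an address appears. A rough equivalent, where lookupIP is a hypothetical stand-in for the libvirt lease query:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying libvirt's DHCP leases for the domain's MAC
	// address; the real code keeps failing until the guest has obtained an address.
	func lookupIP(mac string) (string, error) {
		return "", errors.New("unable to find current IP address") // placeholder
	}

	// waitForIP retries with a growing, jittered delay, much like the retry.go
	// lines above (265ms, 390ms, ... climbing to a few seconds between attempts).
	func waitForIP(mac string, deadline time.Duration) (string, error) {
		delay := 250 * time.Millisecond
		start := time.Now()
		for time.Since(start) < deadline {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			if delay < 4*time.Second {
				delay *= 2
			}
		}
		return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
	}

	func main() {
		fmt.Println(waitForIP("52:54:00:9b:b2:a8", 30*time.Second))
	}
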
	I1004 03:18:29.572575   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.572946   30630 main.go:141] libmachine: (ha-994751) Found IP for machine: 192.168.39.65
	I1004 03:18:29.572975   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has current primary IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.572983   30630 main.go:141] libmachine: (ha-994751) Reserving static IP address...
	I1004 03:18:29.573371   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find host DHCP lease matching {name: "ha-994751", mac: "52:54:00:9b:b2:a8", ip: "192.168.39.65"} in network mk-ha-994751
	I1004 03:18:29.642317   30630 main.go:141] libmachine: (ha-994751) DBG | Getting to WaitForSSH function...
	I1004 03:18:29.642344   30630 main.go:141] libmachine: (ha-994751) Reserved static IP address: 192.168.39.65
	I1004 03:18:29.642356   30630 main.go:141] libmachine: (ha-994751) Waiting for SSH to be available...
	I1004 03:18:29.644819   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.645174   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:29.645189   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.645350   30630 main.go:141] libmachine: (ha-994751) DBG | Using SSH client type: external
	I1004 03:18:29.645373   30630 main.go:141] libmachine: (ha-994751) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa (-rw-------)
	I1004 03:18:29.645433   30630 main.go:141] libmachine: (ha-994751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 03:18:29.645459   30630 main.go:141] libmachine: (ha-994751) DBG | About to run SSH command:
	I1004 03:18:29.645475   30630 main.go:141] libmachine: (ha-994751) DBG | exit 0
	I1004 03:18:29.768066   30630 main.go:141] libmachine: (ha-994751) DBG | SSH cmd err, output: <nil>: 
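
WaitForSSH simply keeps running exit 0 over the newly written key until the command succeeds; the ssh options are the ones printed in the log line above. A sketch that shells out the same way, with the IP and key path copied from this run, so treat them as placeholders:

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	// sshReady runs "exit 0" on the guest with the same hardened options the log
	// shows; a nil error means sshd is up and the generated key is accepted.
	func sshReady(ip, keyPath string) error {
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no", "-o", "IdentitiesOnly=yes",
			"-i", keyPath, "-p", "22", "docker@"+ip, "exit 0")
		return cmd.Run()
	}

	func main() {
		ip := "192.168.39.65" // values from this run; substitute your own guest
		key := "/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa"
		for sshReady(ip, key) != nil {
			time.Sleep(time.Second)
		}
		log.Println("SSH is available")
	}
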
	I1004 03:18:29.768301   30630 main.go:141] libmachine: (ha-994751) KVM machine creation complete!
	I1004 03:18:29.768621   30630 main.go:141] libmachine: (ha-994751) Calling .GetConfigRaw
	I1004 03:18:29.769131   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:29.769285   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:29.769480   30630 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 03:18:29.769497   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:18:29.770831   30630 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 03:18:29.770850   30630 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 03:18:29.770858   30630 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 03:18:29.770868   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:29.772990   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.773299   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:29.773321   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.773460   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:29.773635   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.773787   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.773964   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:29.774099   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:29.774324   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:29.774336   30630 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 03:18:29.870824   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:18:29.870852   30630 main.go:141] libmachine: Detecting the provisioner...
	I1004 03:18:29.870864   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:29.873067   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.873430   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:29.873464   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.873650   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:29.873816   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.873947   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.874038   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:29.874214   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:29.874367   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:29.874377   30630 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 03:18:29.972554   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 03:18:29.972627   30630 main.go:141] libmachine: found compatible host: buildroot
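
Provisioner detection amounts to cat /etc/os-release plus a lookup of the NAME/ID fields, which is how the Buildroot guest is recognised here. A small standalone parser for that key=value format:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// parseOSRelease reads an os-release style file (KEY=value, values optionally
	// quoted) into a map, which is enough to recognise Buildroot from the output
	// shown in the log.
	func parseOSRelease(path string) (map[string]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()
		info := map[string]string{}
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			if k, v, ok := strings.Cut(line, "="); ok {
				info[k] = strings.Trim(v, `"`)
			}
		}
		return info, sc.Err()
	}

	func main() {
		info, err := parseOSRelease("/etc/os-release")
		if err != nil {
			panic(err)
		}
		fmt.Printf("detected %s %s\n", info["NAME"], info["VERSION_ID"]) // e.g. Buildroot 2023.02.9
	}
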
	I1004 03:18:29.972634   30630 main.go:141] libmachine: Provisioning with buildroot...
	I1004 03:18:29.972640   30630 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:18:29.972883   30630 buildroot.go:166] provisioning hostname "ha-994751"
	I1004 03:18:29.972906   30630 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:18:29.973092   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:29.975627   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.976040   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:29.976059   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.976197   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:29.976336   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.976489   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.976626   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:29.976745   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:29.976951   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:29.976969   30630 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-994751 && echo "ha-994751" | sudo tee /etc/hostname
	I1004 03:18:30.090454   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-994751
	
	I1004 03:18:30.090480   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.094372   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.094783   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.094812   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.094993   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.095167   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.095331   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.095446   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.095586   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:30.095799   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:30.095822   30630 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-994751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-994751/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-994751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:18:30.200998   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:18:30.201031   30630 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 03:18:30.201106   30630 buildroot.go:174] setting up certificates
	I1004 03:18:30.201120   30630 provision.go:84] configureAuth start
	I1004 03:18:30.201131   30630 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:18:30.201353   30630 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:18:30.203920   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.204369   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.204390   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.204563   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.206770   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.207168   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.207195   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.207325   30630 provision.go:143] copyHostCerts
	I1004 03:18:30.207355   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:18:30.207398   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 03:18:30.207407   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:18:30.207474   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 03:18:30.207553   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:18:30.207574   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 03:18:30.207581   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:18:30.207605   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 03:18:30.207644   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:18:30.207661   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 03:18:30.207671   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:18:30.207691   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 03:18:30.207739   30630 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.ha-994751 san=[127.0.0.1 192.168.39.65 ha-994751 localhost minikube]
	I1004 03:18:30.399105   30630 provision.go:177] copyRemoteCerts
	I1004 03:18:30.399156   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:18:30.399185   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.401949   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.402239   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.402273   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.402458   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.402612   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.402732   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.402824   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:30.481271   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:18:30.481342   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 03:18:30.505491   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:18:30.505567   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:18:30.528533   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:18:30.528602   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1004 03:18:30.551611   30630 provision.go:87] duration metric: took 350.480163ms to configureAuth
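
configureAuth issues a server certificate whose SANs cover every name the node may be reached by (127.0.0.1, 192.168.39.65, ha-994751, localhost, minikube) and signs it with the machine CA from certs/ca.pem. The sketch below is simplified to a self-signed certificate, just to show where those SANs go:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-994751"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the provision.go line above.
			DNSNames:    []string{"ha-994751", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.65")},
		}
		// Self-signed for brevity; minikube instead signs with ca.pem / ca-key.pem.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
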
	I1004 03:18:30.551641   30630 buildroot.go:189] setting minikube options for container-runtime
	I1004 03:18:30.551807   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:18:30.551909   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.554312   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.554641   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.554668   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.554833   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.554998   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.555138   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.555257   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.555398   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:30.555570   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:30.555585   30630 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:18:30.762357   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:18:30.762381   30630 main.go:141] libmachine: Checking connection to Docker...
	I1004 03:18:30.762388   30630 main.go:141] libmachine: (ha-994751) Calling .GetURL
	I1004 03:18:30.763606   30630 main.go:141] libmachine: (ha-994751) DBG | Using libvirt version 6000000
	I1004 03:18:30.765692   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.766020   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.766048   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.766206   30630 main.go:141] libmachine: Docker is up and running!
	I1004 03:18:30.766228   30630 main.go:141] libmachine: Reticulating splines...
	I1004 03:18:30.766236   30630 client.go:171] duration metric: took 24.951657625s to LocalClient.Create
	I1004 03:18:30.766258   30630 start.go:167] duration metric: took 24.951708327s to libmachine.API.Create "ha-994751"
	I1004 03:18:30.766279   30630 start.go:293] postStartSetup for "ha-994751" (driver="kvm2")
	I1004 03:18:30.766291   30630 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:18:30.766310   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.766550   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:18:30.766573   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.768581   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.768893   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.768918   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.769018   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.769215   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.769374   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.769501   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:30.850107   30630 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:18:30.854350   30630 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 03:18:30.854372   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 03:18:30.854448   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 03:18:30.854554   30630 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 03:18:30.854567   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /etc/ssl/certs/168792.pem
	I1004 03:18:30.854687   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:18:30.863939   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:18:30.887968   30630 start.go:296] duration metric: took 121.677235ms for postStartSetup
	I1004 03:18:30.888032   30630 main.go:141] libmachine: (ha-994751) Calling .GetConfigRaw
	I1004 03:18:30.888647   30630 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:18:30.891188   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.891538   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.891578   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.891766   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:18:30.891959   30630 start.go:128] duration metric: took 25.095424862s to createHost
	I1004 03:18:30.891980   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.894352   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.894614   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.894640   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.894753   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.894910   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.895041   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.895137   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.895264   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:30.895466   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:30.895480   30630 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 03:18:30.992599   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011910.970126057
	
	I1004 03:18:30.992618   30630 fix.go:216] guest clock: 1728011910.970126057
	I1004 03:18:30.992625   30630 fix.go:229] Guest: 2024-10-04 03:18:30.970126057 +0000 UTC Remote: 2024-10-04 03:18:30.89197094 +0000 UTC m=+25.204801944 (delta=78.155117ms)
	I1004 03:18:30.992662   30630 fix.go:200] guest clock delta is within tolerance: 78.155117ms
	I1004 03:18:30.992667   30630 start.go:83] releasing machines lock for "ha-994751", held for 25.19620396s
	I1004 03:18:30.992685   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.992896   30630 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:18:30.995326   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.995629   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.995653   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.995813   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.996311   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.996458   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.996541   30630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:18:30.996578   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.996668   30630 ssh_runner.go:195] Run: cat /version.json
	I1004 03:18:30.996687   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.999188   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.999227   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.999574   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.999599   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.999648   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.999673   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.999727   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.999923   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.999936   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:31.000065   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:31.000137   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:31.000197   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:31.000242   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:31.000338   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:31.092724   30630 ssh_runner.go:195] Run: systemctl --version
	I1004 03:18:31.098738   30630 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:18:31.257592   30630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 03:18:31.263326   30630 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 03:18:31.263402   30630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:18:31.278780   30630 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 03:18:31.278800   30630 start.go:495] detecting cgroup driver to use...
	I1004 03:18:31.278866   30630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:18:31.295874   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:18:31.310006   30630 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:18:31.310076   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:18:31.323189   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:18:31.336586   30630 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:18:31.452424   30630 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:18:31.611505   30630 docker.go:233] disabling docker service ...
	I1004 03:18:31.611576   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:18:31.625795   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:18:31.640666   30630 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:18:31.774429   30630 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:18:31.903530   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:18:31.917157   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:18:31.935039   30630 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:18:31.935118   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.945550   30630 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:18:31.945617   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.955961   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.966381   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.976764   30630 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:18:31.987308   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.997608   30630 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:32.014334   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
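For orientation (not part of the captured log): the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pointing at the pause image, cgroupfs cgroup management, a "pod" conmon cgroup, and an unprivileged-port sysctl, while the earlier tee wrote the CRI endpoint into /etc/crictl.yaml. A hypothetical spot-check of the result might look like this:

    # Hypothetical spot-check of the CRI-O settings applied by the sed commands above.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # Expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [...])
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info   # endpoint also recorded in /etc/crictl.yaml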
	I1004 03:18:32.025406   30630 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:18:32.035105   30630 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 03:18:32.035157   30630 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 03:18:32.048803   30630 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
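The sysctl failure above is expected on a fresh guest: the bridge-netfilter sysctl only exists once br_netfilter is loaded, so the fallback is to load the module and then enable IPv4 forwarding. A minimal sketch of the same sequence (not taken from the log):

    # If the bridge-netfilter sysctl is missing, load the module, then turn on forwarding.
    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"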
	I1004 03:18:32.058421   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:18:32.175897   30630 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 03:18:32.272377   30630 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:18:32.272435   30630 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:18:32.277743   30630 start.go:563] Will wait 60s for crictl version
	I1004 03:18:32.277805   30630 ssh_runner.go:195] Run: which crictl
	I1004 03:18:32.281362   30630 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:18:32.318848   30630 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 03:18:32.318925   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:18:32.346909   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:18:32.375477   30630 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 03:18:32.376825   30630 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:18:32.379208   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:32.379571   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:32.379594   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:32.379801   30630 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 03:18:32.384207   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
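The one-liner above is minikube's idempotent /etc/hosts update: any stale host.minikube.internal entry is filtered out before the gateway mapping is appended. Unrolled for readability (a sketch with a hypothetical temp file name, not from the log):

    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/hosts.new    # drop any old entry
    printf '192.168.39.1\thost.minikube.internal\n' >> /tmp/hosts.new   # append the gateway mapping
    sudo cp /tmp/hosts.new /etc/hosts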
	I1004 03:18:32.397053   30630 kubeadm.go:883] updating cluster {Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 03:18:32.397153   30630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:18:32.397223   30630 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:18:32.434648   30630 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 03:18:32.434703   30630 ssh_runner.go:195] Run: which lz4
	I1004 03:18:32.438603   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1004 03:18:32.438682   30630 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 03:18:32.442788   30630 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 03:18:32.442821   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 03:18:33.747633   30630 crio.go:462] duration metric: took 1.308983475s to copy over tarball
	I1004 03:18:33.747699   30630 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 03:18:35.713127   30630 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.965391744s)
	I1004 03:18:35.713157   30630 crio.go:469] duration metric: took 1.965495286s to extract the tarball
	I1004 03:18:35.713167   30630 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 03:18:35.749886   30630 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:18:35.795226   30630 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:18:35.795249   30630 cache_images.go:84] Images are preloaded, skipping loading
	I1004 03:18:35.795257   30630 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.31.1 crio true true} ...
	I1004 03:18:35.795346   30630 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-994751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 03:18:35.795408   30630 ssh_runner.go:195] Run: crio config
	I1004 03:18:35.841695   30630 cni.go:84] Creating CNI manager for ""
	I1004 03:18:35.841718   30630 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1004 03:18:35.841728   30630 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 03:18:35.841746   30630 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-994751 NodeName:ha-994751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 03:18:35.841868   30630 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-994751"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
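This rendered config is written out as /var/tmp/minikube/kubeadm.yaml.new, later copied to /var/tmp/minikube/kubeadm.yaml, and handed to kubeadm via --config (see the init invocation further down). A hypothetical way to sanity-check it by hand before an init would be a dry run against the same binary:

    # A sketch, not from the log: validate the rendered config without touching the node.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run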
	I1004 03:18:35.841893   30630 kube-vip.go:115] generating kube-vip config ...
	I1004 03:18:35.841933   30630 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1004 03:18:35.858111   30630 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1004 03:18:35.858218   30630 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
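kube-vip runs as a static pod on each control-plane node and binds the HA virtual IP (192.168.39.254, the APIServerHAVIP from the cluster config) on eth0 of whichever node holds the plndr-cp-lock lease. A quick manual check on the leader might look like this (a sketch; not part of the test output):

    ip addr show eth0 | grep 192.168.39.254        # the VIP should be present on the lease holder
    curl -k https://192.168.39.254:8443/healthz    # API server reachable through the VIP and port 8443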
	I1004 03:18:35.858274   30630 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:18:35.867809   30630 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 03:18:35.867872   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1004 03:18:35.876830   30630 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1004 03:18:35.892172   30630 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:18:35.907631   30630 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1004 03:18:35.923147   30630 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1004 03:18:35.939242   30630 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:18:35.943241   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:18:35.955036   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:18:36.063830   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:18:36.080131   30630 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751 for IP: 192.168.39.65
	I1004 03:18:36.080153   30630 certs.go:194] generating shared ca certs ...
	I1004 03:18:36.080169   30630 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.080303   30630 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 03:18:36.080336   30630 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 03:18:36.080345   30630 certs.go:256] generating profile certs ...
	I1004 03:18:36.080388   30630 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key
	I1004 03:18:36.080414   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt with IP's: []
	I1004 03:18:36.205325   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt ...
	I1004 03:18:36.205354   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt: {Name:mk097459d54d355cf05d74a196b72b51ed16216c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.205539   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key ...
	I1004 03:18:36.205553   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key: {Name:mka6efef398570320df79b26ee2d84116b88400b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.205628   30630 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.211fcd35
	I1004 03:18:36.205642   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.211fcd35 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65 192.168.39.254]
	I1004 03:18:36.278398   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.211fcd35 ...
	I1004 03:18:36.278426   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.211fcd35: {Name:mk5a54fedcb658e02d5a59c4cc7f959d0efc3b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.278574   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.211fcd35 ...
	I1004 03:18:36.278586   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.211fcd35: {Name:mk30bcb47c9e314eff3c9b6a3bb1c1b8ba019417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.278653   30630 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.211fcd35 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt
	I1004 03:18:36.278741   30630 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.211fcd35 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key
	I1004 03:18:36.278802   30630 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key
	I1004 03:18:36.278825   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt with IP's: []
	I1004 03:18:36.411462   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt ...
	I1004 03:18:36.411499   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt: {Name:mk5cbb9b0a13c8121c937d53956001313fc362d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.411652   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key ...
	I1004 03:18:36.411663   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key: {Name:mkcfa953ddb2aa55fb392dd2b0300dc4d7ed9a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.411729   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:18:36.411745   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:18:36.411758   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:18:36.411771   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:18:36.411798   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:18:36.411811   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:18:36.411823   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:18:36.411835   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:18:36.411884   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 03:18:36.411919   30630 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 03:18:36.411928   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:18:36.411953   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:18:36.411976   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:18:36.411996   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 03:18:36.412030   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:18:36.412053   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:18:36.412066   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem -> /usr/share/ca-certificates/16879.pem
	I1004 03:18:36.412078   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /usr/share/ca-certificates/168792.pem
	I1004 03:18:36.412548   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:18:36.441146   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 03:18:36.468175   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:18:36.494488   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 03:18:36.520930   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1004 03:18:36.546306   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 03:18:36.571622   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:18:36.595650   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:18:36.619154   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:18:36.643284   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 03:18:36.666998   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 03:18:36.692308   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 03:18:36.710569   30630 ssh_runner.go:195] Run: openssl version
	I1004 03:18:36.722532   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:18:36.738971   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:18:36.743511   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:18:36.743568   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:18:36.749416   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:18:36.760315   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 03:18:36.771516   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 03:18:36.776032   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:18:36.776090   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 03:18:36.781784   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 03:18:36.792883   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 03:18:36.804051   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 03:18:36.808536   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:18:36.808596   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 03:18:36.814260   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
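The openssl/ln pairs above follow the standard OpenSSL CA directory convention: each certificate placed in /etc/ssl/certs is also reachable through a symlink named after its subject hash with a .0 suffix, which is how TLS clients on the guest locate the minikube CA and the per-run test certs. Compressed into one sketch (the commands mirror what the log runs for minikubeCA.pem):

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # here h resolves to b5213941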
	I1004 03:18:36.827637   30630 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:18:36.833576   30630 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 03:18:36.833628   30630 kubeadm.go:392] StartCluster: {Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:18:36.833720   30630 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 03:18:36.833768   30630 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 03:18:36.890855   30630 cri.go:89] found id: ""
	I1004 03:18:36.890927   30630 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 03:18:36.902870   30630 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 03:18:36.912801   30630 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 03:18:36.922312   30630 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 03:18:36.922332   30630 kubeadm.go:157] found existing configuration files:
	
	I1004 03:18:36.922378   30630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 03:18:36.931373   30630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 03:18:36.931434   30630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 03:18:36.940992   30630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 03:18:36.949951   30630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 03:18:36.950008   30630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 03:18:36.959253   30630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 03:18:36.968235   30630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 03:18:36.968290   30630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 03:18:36.977554   30630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 03:18:36.986351   30630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 03:18:36.986408   30630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 03:18:36.995719   30630 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 03:18:37.089352   30630 kubeadm.go:310] W1004 03:18:37.073375     834 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 03:18:37.090411   30630 kubeadm.go:310] W1004 03:18:37.074383     834 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 03:18:37.191769   30630 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 03:18:47.918991   30630 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 03:18:47.919112   30630 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 03:18:47.919261   30630 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 03:18:47.919365   30630 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 03:18:47.919464   30630 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 03:18:47.919518   30630 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 03:18:47.920818   30630 out.go:235]   - Generating certificates and keys ...
	I1004 03:18:47.920882   30630 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 03:18:47.920936   30630 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 03:18:47.921009   30630 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 03:18:47.921075   30630 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1004 03:18:47.921133   30630 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1004 03:18:47.921203   30630 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1004 03:18:47.921280   30630 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1004 03:18:47.921443   30630 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-994751 localhost] and IPs [192.168.39.65 127.0.0.1 ::1]
	I1004 03:18:47.921519   30630 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1004 03:18:47.921666   30630 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-994751 localhost] and IPs [192.168.39.65 127.0.0.1 ::1]
	I1004 03:18:47.921762   30630 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 03:18:47.921849   30630 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 03:18:47.921910   30630 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1004 03:18:47.922005   30630 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 03:18:47.922057   30630 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 03:18:47.922112   30630 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 03:18:47.922177   30630 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 03:18:47.922290   30630 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 03:18:47.922377   30630 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 03:18:47.922447   30630 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 03:18:47.922519   30630 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 03:18:47.923983   30630 out.go:235]   - Booting up control plane ...
	I1004 03:18:47.924085   30630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 03:18:47.924153   30630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 03:18:47.924208   30630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 03:18:47.924334   30630 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 03:18:47.924425   30630 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 03:18:47.924472   30630 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 03:18:47.924582   30630 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 03:18:47.924675   30630 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 03:18:47.924735   30630 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001267899s
	I1004 03:18:47.924846   30630 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 03:18:47.924901   30630 kubeadm.go:310] [api-check] The API server is healthy after 5.62627754s
	I1004 03:18:47.924992   30630 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 03:18:47.925097   30630 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 03:18:47.925151   30630 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 03:18:47.925310   30630 kubeadm.go:310] [mark-control-plane] Marking the node ha-994751 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 03:18:47.925388   30630 kubeadm.go:310] [bootstrap-token] Using token: t8dola.kmwzcq881z4dnfcq
	I1004 03:18:47.926624   30630 out.go:235]   - Configuring RBAC rules ...
	I1004 03:18:47.926738   30630 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 03:18:47.926809   30630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 03:18:47.926957   30630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 03:18:47.927140   30630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 03:18:47.927310   30630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 03:18:47.927398   30630 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 03:18:47.927508   30630 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 03:18:47.927559   30630 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 03:18:47.927607   30630 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 03:18:47.927613   30630 kubeadm.go:310] 
	I1004 03:18:47.927661   30630 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 03:18:47.927667   30630 kubeadm.go:310] 
	I1004 03:18:47.927736   30630 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 03:18:47.927742   30630 kubeadm.go:310] 
	I1004 03:18:47.927764   30630 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 03:18:47.927863   30630 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 03:18:47.927918   30630 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 03:18:47.927926   30630 kubeadm.go:310] 
	I1004 03:18:47.927996   30630 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 03:18:47.928006   30630 kubeadm.go:310] 
	I1004 03:18:47.928052   30630 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 03:18:47.928059   30630 kubeadm.go:310] 
	I1004 03:18:47.928102   30630 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 03:18:47.928189   30630 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 03:18:47.928261   30630 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 03:18:47.928268   30630 kubeadm.go:310] 
	I1004 03:18:47.928337   30630 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 03:18:47.928401   30630 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 03:18:47.928407   30630 kubeadm.go:310] 
	I1004 03:18:47.928480   30630 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t8dola.kmwzcq881z4dnfcq \
	I1004 03:18:47.928565   30630 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 \
	I1004 03:18:47.928587   30630 kubeadm.go:310] 	--control-plane 
	I1004 03:18:47.928593   30630 kubeadm.go:310] 
	I1004 03:18:47.928677   30630 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 03:18:47.928689   30630 kubeadm.go:310] 
	I1004 03:18:47.928756   30630 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t8dola.kmwzcq881z4dnfcq \
	I1004 03:18:47.928856   30630 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 
	I1004 03:18:47.928865   30630 cni.go:84] Creating CNI manager for ""
	I1004 03:18:47.928870   30630 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1004 03:18:47.930177   30630 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1004 03:18:47.931356   30630 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1004 03:18:47.936846   30630 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1004 03:18:47.936861   30630 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1004 03:18:47.954946   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1004 03:18:48.341839   30630 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 03:18:48.341927   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-994751 minikube.k8s.io/updated_at=2024_10_04T03_18_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-994751 minikube.k8s.io/primary=true
	I1004 03:18:48.341931   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:48.378883   30630 ops.go:34] apiserver oom_adj: -16
	I1004 03:18:48.535248   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:49.035895   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:49.535506   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:50.036160   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:50.536177   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:51.036074   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:51.535453   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:52.036318   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:52.141351   30630 kubeadm.go:1113] duration metric: took 3.799503635s to wait for elevateKubeSystemPrivileges
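	The repeated "get sa default" calls above poll until the default service account exists and the minikube-rbac clusterrolebinding created a few lines earlier has taken effect. A manual equivalent of that check, shown only as a sketch:

	    kubectl --context ha-994751 -n default get serviceaccount default
	    kubectl --context ha-994751 get clusterrolebinding minikube-rbac -o wide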
	I1004 03:18:52.141482   30630 kubeadm.go:394] duration metric: took 15.307852794s to StartCluster
	I1004 03:18:52.141506   30630 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:52.141595   30630 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:18:52.142340   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:52.142543   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 03:18:52.142540   30630 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:18:52.142619   30630 start.go:241] waiting for startup goroutines ...
	I1004 03:18:52.142559   30630 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 03:18:52.142650   30630 addons.go:69] Setting default-storageclass=true in profile "ha-994751"
	I1004 03:18:52.142673   30630 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-994751"
	I1004 03:18:52.142653   30630 addons.go:69] Setting storage-provisioner=true in profile "ha-994751"
	I1004 03:18:52.142785   30630 addons.go:234] Setting addon storage-provisioner=true in "ha-994751"
	I1004 03:18:52.142836   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:18:52.142751   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:18:52.143105   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.143135   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.143203   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.143243   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.158739   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38855
	I1004 03:18:52.159139   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.159746   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.159801   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.160123   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.160704   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.160750   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.163696   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35851
	I1004 03:18:52.164259   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.164849   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.164876   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.165236   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.165397   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:18:52.167571   30630 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:18:52.167892   30630 kapi.go:59] client config for ha-994751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key", CAFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 03:18:52.168431   30630 cert_rotation.go:140] Starting client certificate rotation controller
	I1004 03:18:52.168621   30630 addons.go:234] Setting addon default-storageclass=true in "ha-994751"
	I1004 03:18:52.168661   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:18:52.168962   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.168995   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.177647   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33667
	I1004 03:18:52.178272   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.178780   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.178807   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.179185   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.179369   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:18:52.181245   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:52.182949   30630 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 03:18:52.184312   30630 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 03:18:52.184328   30630 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 03:18:52.184342   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:52.185802   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36897
	I1004 03:18:52.186249   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.186707   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.186731   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.187053   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.187403   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:52.187660   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.187699   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.187838   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:52.187860   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:52.188023   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:52.188171   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:52.188318   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:52.188522   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:52.202680   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36473
	I1004 03:18:52.203159   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.203886   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.203918   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.204247   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.204428   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:18:52.205967   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:52.206173   30630 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 03:18:52.206189   30630 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 03:18:52.206206   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:52.208832   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:52.209269   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:52.209304   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:52.209405   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:52.209567   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:52.209709   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:52.209838   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:52.346822   30630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 03:18:52.355141   30630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 03:18:52.371008   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
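	The pipeline above injects a hosts block for host.minikube.internal into the CoreDNS Corefile and replaces the configmap. The rewritten Corefile could be inspected afterwards with something like this (illustrative, not part of the run):

	    kubectl --context ha-994751 -n kube-system get configmap coredns \
	      -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'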
	I1004 03:18:52.715722   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.715742   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.716027   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.716068   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.716084   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.716095   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.716104   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.716350   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.716358   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.716370   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.716432   30630 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1004 03:18:52.716457   30630 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1004 03:18:52.716568   30630 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1004 03:18:52.716579   30630 round_trippers.go:469] Request Headers:
	I1004 03:18:52.716592   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:18:52.716603   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:18:52.723900   30630 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:18:52.724457   30630 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1004 03:18:52.724472   30630 round_trippers.go:469] Request Headers:
	I1004 03:18:52.724481   30630 round_trippers.go:473]     Content-Type: application/json
	I1004 03:18:52.724485   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:18:52.724494   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:18:52.728158   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:18:52.728358   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.728379   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.728631   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.728667   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.728678   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.991032   30630 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1004 03:18:52.991106   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.991118   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.991464   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.991518   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.991525   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.991538   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.991549   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.991787   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.991815   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.991835   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.993564   30630 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1004 03:18:52.994914   30630 addons.go:510] duration metric: took 852.347466ms for enable addons: enabled=[default-storageclass storage-provisioner]
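	With default-storageclass and storage-provisioner enabled for this profile, the addon state and the resulting StorageClass could be listed directly; shown only for reference, not executed in this log:

	    minikube addons list -p ha-994751
	    kubectl --context ha-994751 get storageclass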
	I1004 03:18:52.994963   30630 start.go:246] waiting for cluster config update ...
	I1004 03:18:52.994978   30630 start.go:255] writing updated cluster config ...
	I1004 03:18:52.996475   30630 out.go:201] 
	I1004 03:18:52.997828   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:18:52.997937   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:18:52.999684   30630 out.go:177] * Starting "ha-994751-m02" control-plane node in "ha-994751" cluster
	I1004 03:18:53.001098   30630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:18:53.001129   30630 cache.go:56] Caching tarball of preloaded images
	I1004 03:18:53.001252   30630 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 03:18:53.001270   30630 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:18:53.001389   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:18:53.001704   30630 start.go:360] acquireMachinesLock for ha-994751-m02: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 03:18:53.001767   30630 start.go:364] duration metric: took 36.717µs to acquireMachinesLock for "ha-994751-m02"
	I1004 03:18:53.001788   30630 start.go:93] Provisioning new machine with config: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:18:53.001888   30630 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1004 03:18:53.003601   30630 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 03:18:53.003685   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:53.003710   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:53.018286   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I1004 03:18:53.018739   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:53.019227   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:53.019248   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:53.019586   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:53.019746   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetMachineName
	I1004 03:18:53.019878   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:18:53.020036   30630 start.go:159] libmachine.API.Create for "ha-994751" (driver="kvm2")
	I1004 03:18:53.020058   30630 client.go:168] LocalClient.Create starting
	I1004 03:18:53.020084   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 03:18:53.020121   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:18:53.020141   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:18:53.020189   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 03:18:53.020206   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:18:53.020216   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:18:53.020231   30630 main.go:141] libmachine: Running pre-create checks...
	I1004 03:18:53.020238   30630 main.go:141] libmachine: (ha-994751-m02) Calling .PreCreateCheck
	I1004 03:18:53.020407   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetConfigRaw
	I1004 03:18:53.020742   30630 main.go:141] libmachine: Creating machine...
	I1004 03:18:53.020759   30630 main.go:141] libmachine: (ha-994751-m02) Calling .Create
	I1004 03:18:53.020907   30630 main.go:141] libmachine: (ha-994751-m02) Creating KVM machine...
	I1004 03:18:53.022100   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found existing default KVM network
	I1004 03:18:53.022275   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found existing private KVM network mk-ha-994751
	I1004 03:18:53.022411   30630 main.go:141] libmachine: (ha-994751-m02) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02 ...
	I1004 03:18:53.022435   30630 main.go:141] libmachine: (ha-994751-m02) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 03:18:53.022495   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:53.022407   31016 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:53.022574   30630 main.go:141] libmachine: (ha-994751-m02) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 03:18:53.247842   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:53.247679   31016 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa...
	I1004 03:18:53.574709   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:53.574567   31016 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/ha-994751-m02.rawdisk...
	I1004 03:18:53.574744   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Writing magic tar header
	I1004 03:18:53.574759   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Writing SSH key tar header
	I1004 03:18:53.574776   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:53.574706   31016 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02 ...
	I1004 03:18:53.574856   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02
	I1004 03:18:53.574886   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02 (perms=drwx------)
	I1004 03:18:53.574896   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 03:18:53.574906   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 03:18:53.574926   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 03:18:53.574938   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 03:18:53.574962   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 03:18:53.574971   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:53.574979   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 03:18:53.574992   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 03:18:53.575005   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 03:18:53.575014   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins
	I1004 03:18:53.575020   30630 main.go:141] libmachine: (ha-994751-m02) Creating domain...
	I1004 03:18:53.575036   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home
	I1004 03:18:53.575046   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Skipping /home - not owner
	I1004 03:18:53.575952   30630 main.go:141] libmachine: (ha-994751-m02) define libvirt domain using xml: 
	I1004 03:18:53.575978   30630 main.go:141] libmachine: (ha-994751-m02) <domain type='kvm'>
	I1004 03:18:53.575998   30630 main.go:141] libmachine: (ha-994751-m02)   <name>ha-994751-m02</name>
	I1004 03:18:53.576012   30630 main.go:141] libmachine: (ha-994751-m02)   <memory unit='MiB'>2200</memory>
	I1004 03:18:53.576021   30630 main.go:141] libmachine: (ha-994751-m02)   <vcpu>2</vcpu>
	I1004 03:18:53.576030   30630 main.go:141] libmachine: (ha-994751-m02)   <features>
	I1004 03:18:53.576038   30630 main.go:141] libmachine: (ha-994751-m02)     <acpi/>
	I1004 03:18:53.576047   30630 main.go:141] libmachine: (ha-994751-m02)     <apic/>
	I1004 03:18:53.576055   30630 main.go:141] libmachine: (ha-994751-m02)     <pae/>
	I1004 03:18:53.576064   30630 main.go:141] libmachine: (ha-994751-m02)     
	I1004 03:18:53.576072   30630 main.go:141] libmachine: (ha-994751-m02)   </features>
	I1004 03:18:53.576082   30630 main.go:141] libmachine: (ha-994751-m02)   <cpu mode='host-passthrough'>
	I1004 03:18:53.576089   30630 main.go:141] libmachine: (ha-994751-m02)   
	I1004 03:18:53.576099   30630 main.go:141] libmachine: (ha-994751-m02)   </cpu>
	I1004 03:18:53.576106   30630 main.go:141] libmachine: (ha-994751-m02)   <os>
	I1004 03:18:53.576119   30630 main.go:141] libmachine: (ha-994751-m02)     <type>hvm</type>
	I1004 03:18:53.576130   30630 main.go:141] libmachine: (ha-994751-m02)     <boot dev='cdrom'/>
	I1004 03:18:53.576135   30630 main.go:141] libmachine: (ha-994751-m02)     <boot dev='hd'/>
	I1004 03:18:53.576144   30630 main.go:141] libmachine: (ha-994751-m02)     <bootmenu enable='no'/>
	I1004 03:18:53.576152   30630 main.go:141] libmachine: (ha-994751-m02)   </os>
	I1004 03:18:53.576165   30630 main.go:141] libmachine: (ha-994751-m02)   <devices>
	I1004 03:18:53.576176   30630 main.go:141] libmachine: (ha-994751-m02)     <disk type='file' device='cdrom'>
	I1004 03:18:53.576189   30630 main.go:141] libmachine: (ha-994751-m02)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/boot2docker.iso'/>
	I1004 03:18:53.576200   30630 main.go:141] libmachine: (ha-994751-m02)       <target dev='hdc' bus='scsi'/>
	I1004 03:18:53.576208   30630 main.go:141] libmachine: (ha-994751-m02)       <readonly/>
	I1004 03:18:53.576216   30630 main.go:141] libmachine: (ha-994751-m02)     </disk>
	I1004 03:18:53.576224   30630 main.go:141] libmachine: (ha-994751-m02)     <disk type='file' device='disk'>
	I1004 03:18:53.576236   30630 main.go:141] libmachine: (ha-994751-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 03:18:53.576251   30630 main.go:141] libmachine: (ha-994751-m02)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/ha-994751-m02.rawdisk'/>
	I1004 03:18:53.576261   30630 main.go:141] libmachine: (ha-994751-m02)       <target dev='hda' bus='virtio'/>
	I1004 03:18:53.576285   30630 main.go:141] libmachine: (ha-994751-m02)     </disk>
	I1004 03:18:53.576307   30630 main.go:141] libmachine: (ha-994751-m02)     <interface type='network'>
	I1004 03:18:53.576317   30630 main.go:141] libmachine: (ha-994751-m02)       <source network='mk-ha-994751'/>
	I1004 03:18:53.576324   30630 main.go:141] libmachine: (ha-994751-m02)       <model type='virtio'/>
	I1004 03:18:53.576335   30630 main.go:141] libmachine: (ha-994751-m02)     </interface>
	I1004 03:18:53.576342   30630 main.go:141] libmachine: (ha-994751-m02)     <interface type='network'>
	I1004 03:18:53.576368   30630 main.go:141] libmachine: (ha-994751-m02)       <source network='default'/>
	I1004 03:18:53.576386   30630 main.go:141] libmachine: (ha-994751-m02)       <model type='virtio'/>
	I1004 03:18:53.576401   30630 main.go:141] libmachine: (ha-994751-m02)     </interface>
	I1004 03:18:53.576413   30630 main.go:141] libmachine: (ha-994751-m02)     <serial type='pty'>
	I1004 03:18:53.576421   30630 main.go:141] libmachine: (ha-994751-m02)       <target port='0'/>
	I1004 03:18:53.576429   30630 main.go:141] libmachine: (ha-994751-m02)     </serial>
	I1004 03:18:53.576437   30630 main.go:141] libmachine: (ha-994751-m02)     <console type='pty'>
	I1004 03:18:53.576447   30630 main.go:141] libmachine: (ha-994751-m02)       <target type='serial' port='0'/>
	I1004 03:18:53.576455   30630 main.go:141] libmachine: (ha-994751-m02)     </console>
	I1004 03:18:53.576462   30630 main.go:141] libmachine: (ha-994751-m02)     <rng model='virtio'>
	I1004 03:18:53.576468   30630 main.go:141] libmachine: (ha-994751-m02)       <backend model='random'>/dev/random</backend>
	I1004 03:18:53.576474   30630 main.go:141] libmachine: (ha-994751-m02)     </rng>
	I1004 03:18:53.576479   30630 main.go:141] libmachine: (ha-994751-m02)     
	I1004 03:18:53.576482   30630 main.go:141] libmachine: (ha-994751-m02)     
	I1004 03:18:53.576487   30630 main.go:141] libmachine: (ha-994751-m02)   </devices>
	I1004 03:18:53.576497   30630 main.go:141] libmachine: (ha-994751-m02) </domain>
	I1004 03:18:53.576508   30630 main.go:141] libmachine: (ha-994751-m02) 
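	The domain defined from the XML above can be inspected on the host with standard virsh commands; these were not executed during the test and are shown only as a sketch:

	    virsh dumpxml ha-994751-m02          # full domain definition as libvirt stored it
	    virsh domiflist ha-994751-m02        # the two virtio interfaces (mk-ha-994751 and default)
	    virsh net-dhcp-leases mk-ha-994751   # DHCP lease once the guest has booted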
	I1004 03:18:53.583962   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:dd:b1:40 in network default
	I1004 03:18:53.584709   30630 main.go:141] libmachine: (ha-994751-m02) Ensuring networks are active...
	I1004 03:18:53.584740   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:53.585441   30630 main.go:141] libmachine: (ha-994751-m02) Ensuring network default is active
	I1004 03:18:53.585785   30630 main.go:141] libmachine: (ha-994751-m02) Ensuring network mk-ha-994751 is active
	I1004 03:18:53.586177   30630 main.go:141] libmachine: (ha-994751-m02) Getting domain xml...
	I1004 03:18:53.586870   30630 main.go:141] libmachine: (ha-994751-m02) Creating domain...
	I1004 03:18:54.836669   30630 main.go:141] libmachine: (ha-994751-m02) Waiting to get IP...
	I1004 03:18:54.837648   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:54.838068   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:54.838093   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:54.838048   31016 retry.go:31] will retry after 198.927613ms: waiting for machine to come up
	I1004 03:18:55.038453   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:55.038905   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:55.039050   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:55.039003   31016 retry.go:31] will retry after 306.415928ms: waiting for machine to come up
	I1004 03:18:55.347491   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:55.347913   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:55.347941   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:55.347876   31016 retry.go:31] will retry after 320.808758ms: waiting for machine to come up
	I1004 03:18:55.670381   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:55.670806   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:55.670832   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:55.670773   31016 retry.go:31] will retry after 393.714723ms: waiting for machine to come up
	I1004 03:18:56.066334   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:56.066789   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:56.066816   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:56.066737   31016 retry.go:31] will retry after 703.186123ms: waiting for machine to come up
	I1004 03:18:56.771284   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:56.771771   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:56.771816   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:56.771717   31016 retry.go:31] will retry after 687.11987ms: waiting for machine to come up
	I1004 03:18:57.460710   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:57.461089   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:57.461132   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:57.461080   31016 retry.go:31] will retry after 992.439827ms: waiting for machine to come up
	I1004 03:18:58.455669   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:58.456094   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:58.456109   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:58.456062   31016 retry.go:31] will retry after 1.176479657s: waiting for machine to come up
	I1004 03:18:59.634390   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:59.634814   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:59.634839   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:59.634775   31016 retry.go:31] will retry after 1.214254179s: waiting for machine to come up
	I1004 03:19:00.850238   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:00.850699   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:00.850731   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:00.850669   31016 retry.go:31] will retry after 1.755607467s: waiting for machine to come up
	I1004 03:19:02.608547   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:02.608946   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:02.608966   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:02.608910   31016 retry.go:31] will retry after 1.912286614s: waiting for machine to come up
	I1004 03:19:04.522463   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:04.522888   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:04.522917   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:04.522826   31016 retry.go:31] will retry after 2.242710645s: waiting for machine to come up
	I1004 03:19:06.766980   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:06.767510   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:06.767541   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:06.767449   31016 retry.go:31] will retry after 3.842874805s: waiting for machine to come up
	I1004 03:19:10.612857   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:10.613334   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:10.613359   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:10.613293   31016 retry.go:31] will retry after 4.05361864s: waiting for machine to come up
	I1004 03:19:14.669514   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.670029   30630 main.go:141] libmachine: (ha-994751-m02) Found IP for machine: 192.168.39.117
	I1004 03:19:14.670051   30630 main.go:141] libmachine: (ha-994751-m02) Reserving static IP address...
	I1004 03:19:14.670068   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has current primary IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.670622   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find host DHCP lease matching {name: "ha-994751-m02", mac: "52:54:00:b0:e7:80", ip: "192.168.39.117"} in network mk-ha-994751
	I1004 03:19:14.745981   30630 main.go:141] libmachine: (ha-994751-m02) Reserved static IP address: 192.168.39.117
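	The static reservation above pins 192.168.39.117 to the VM's MAC inside the mk-ha-994751 network. The resulting DHCP host entry could be checked with (illustrative only):

	    virsh net-dumpxml mk-ha-994751 | grep -A 2 '52:54:00:b0:e7:80'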
	I1004 03:19:14.746008   30630 main.go:141] libmachine: (ha-994751-m02) Waiting for SSH to be available...
	I1004 03:19:14.746017   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Getting to WaitForSSH function...
	I1004 03:19:14.748804   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.749281   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:14.749310   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.749511   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Using SSH client type: external
	I1004 03:19:14.749551   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa (-rw-------)
	I1004 03:19:14.749581   30630 main.go:141] libmachine: (ha-994751-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.117 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 03:19:14.749606   30630 main.go:141] libmachine: (ha-994751-m02) DBG | About to run SSH command:
	I1004 03:19:14.749624   30630 main.go:141] libmachine: (ha-994751-m02) DBG | exit 0
	I1004 03:19:14.876139   30630 main.go:141] libmachine: (ha-994751-m02) DBG | SSH cmd err, output: <nil>: 
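	The external SSH probe above is equivalent to running the printed ssh command by hand; a trimmed-down manual version using the key path, user, and address from this log would be:

	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa \
	      docker@192.168.39.117 'exit 0' && echo 'SSH reachable'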
	I1004 03:19:14.876447   30630 main.go:141] libmachine: (ha-994751-m02) KVM machine creation complete!
	I1004 03:19:14.876809   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetConfigRaw
	I1004 03:19:14.877356   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:14.877589   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:14.877768   30630 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 03:19:14.877780   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetState
	I1004 03:19:14.879122   30630 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 03:19:14.879138   30630 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 03:19:14.879143   30630 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 03:19:14.879149   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:14.881593   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.881953   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:14.881980   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.882095   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:14.882322   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:14.882470   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:14.882643   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:14.882838   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:14.883073   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:14.883086   30630 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 03:19:14.983285   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:19:14.983306   30630 main.go:141] libmachine: Detecting the provisioner...
	I1004 03:19:14.983312   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:14.986285   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.986741   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:14.986757   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.987055   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:14.987278   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:14.987439   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:14.987656   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:14.987873   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:14.988031   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:14.988042   30630 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 03:19:15.088950   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 03:19:15.089011   30630 main.go:141] libmachine: found compatible host: buildroot
	I1004 03:19:15.089017   30630 main.go:141] libmachine: Provisioning with buildroot...
	I1004 03:19:15.089024   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetMachineName
	I1004 03:19:15.089254   30630 buildroot.go:166] provisioning hostname "ha-994751-m02"
	I1004 03:19:15.089274   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetMachineName
	I1004 03:19:15.089431   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.092470   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.092890   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.092918   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.093111   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.093289   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.093421   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.093532   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.093663   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:15.093819   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:15.093835   30630 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-994751-m02 && echo "ha-994751-m02" | sudo tee /etc/hostname
	I1004 03:19:15.206985   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-994751-m02
	
	I1004 03:19:15.207013   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.210129   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.210417   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.210457   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.210609   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.210806   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.210951   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.211140   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.211322   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:15.211488   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:15.211503   30630 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-994751-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-994751-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-994751-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:19:15.321696   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
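	The script above makes 127.0.1.1 resolve to the new hostname. A quick verification on the guest, shown only as an example:

	    hostname                        # expected: ha-994751-m02
	    grep ha-994751-m02 /etc/hosts   # expected: 127.0.1.1 ha-994751-m02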
	I1004 03:19:15.321728   30630 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 03:19:15.321748   30630 buildroot.go:174] setting up certificates
	I1004 03:19:15.321761   30630 provision.go:84] configureAuth start
	I1004 03:19:15.321773   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetMachineName
	I1004 03:19:15.322055   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetIP
	I1004 03:19:15.324655   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.325067   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.325090   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.325226   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.327479   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.327889   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.327929   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.328106   30630 provision.go:143] copyHostCerts
	I1004 03:19:15.328139   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:19:15.328171   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 03:19:15.328185   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:19:15.328272   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 03:19:15.328393   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:19:15.328420   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 03:19:15.328430   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:19:15.328468   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 03:19:15.328620   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:19:15.328652   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 03:19:15.328662   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:19:15.328718   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 03:19:15.328821   30630 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.ha-994751-m02 san=[127.0.0.1 192.168.39.117 ha-994751-m02 localhost minikube]
	I1004 03:19:15.560527   30630 provision.go:177] copyRemoteCerts
	I1004 03:19:15.560590   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:19:15.560612   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.563747   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.564236   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.564307   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.564520   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.564706   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.564861   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.565036   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:19:15.646851   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:19:15.646919   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1004 03:19:15.672945   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:19:15.673021   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 03:19:15.699880   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:19:15.699960   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:19:15.725929   30630 provision.go:87] duration metric: took 404.139584ms to configureAuth
	I1004 03:19:15.725975   30630 buildroot.go:189] setting minikube options for container-runtime
	I1004 03:19:15.726189   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:19:15.726282   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.729150   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.729586   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.729623   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.729761   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.729951   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.730107   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.730283   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.730477   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:15.730682   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:15.730704   30630 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:19:15.953783   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:19:15.953808   30630 main.go:141] libmachine: Checking connection to Docker...
	I1004 03:19:15.953817   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetURL
	I1004 03:19:15.955088   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Using libvirt version 6000000
	I1004 03:19:15.957213   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.957617   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.957642   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.957827   30630 main.go:141] libmachine: Docker is up and running!
	I1004 03:19:15.957841   30630 main.go:141] libmachine: Reticulating splines...
	I1004 03:19:15.957847   30630 client.go:171] duration metric: took 22.937783647s to LocalClient.Create
	I1004 03:19:15.957867   30630 start.go:167] duration metric: took 22.937832099s to libmachine.API.Create "ha-994751"
	I1004 03:19:15.957875   30630 start.go:293] postStartSetup for "ha-994751-m02" (driver="kvm2")
	I1004 03:19:15.957884   30630 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:19:15.957899   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:15.958102   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:19:15.958124   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.960392   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.960717   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.960745   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.960883   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.961062   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.961225   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.961368   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:19:16.042404   30630 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:19:16.047363   30630 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 03:19:16.047388   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 03:19:16.047468   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 03:19:16.047535   30630 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 03:19:16.047546   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /etc/ssl/certs/168792.pem
	I1004 03:19:16.047622   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:19:16.057062   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:19:16.082885   30630 start.go:296] duration metric: took 124.998047ms for postStartSetup
	I1004 03:19:16.082935   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetConfigRaw
	I1004 03:19:16.083581   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetIP
	I1004 03:19:16.086204   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.086582   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.086605   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.086841   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:19:16.087032   30630 start.go:128] duration metric: took 23.085132614s to createHost
	I1004 03:19:16.087053   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:16.089417   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.089782   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.089807   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.089984   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:16.090129   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:16.090241   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:16.090315   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:16.090436   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:16.090606   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:16.090615   30630 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 03:19:16.192923   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011956.165669680
	
	I1004 03:19:16.192949   30630 fix.go:216] guest clock: 1728011956.165669680
	I1004 03:19:16.192957   30630 fix.go:229] Guest: 2024-10-04 03:19:16.16566968 +0000 UTC Remote: 2024-10-04 03:19:16.08704226 +0000 UTC m=+70.399873263 (delta=78.62742ms)
	I1004 03:19:16.192972   30630 fix.go:200] guest clock delta is within tolerance: 78.62742ms
	I1004 03:19:16.192978   30630 start.go:83] releasing machines lock for "ha-994751-m02", held for 23.191201934s
	I1004 03:19:16.193000   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:16.193291   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetIP
	I1004 03:19:16.196268   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.196769   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.196799   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.199156   30630 out.go:177] * Found network options:
	I1004 03:19:16.200650   30630 out.go:177]   - NO_PROXY=192.168.39.65
	W1004 03:19:16.201984   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:19:16.202013   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:16.202608   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:16.202783   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:16.202904   30630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:19:16.202945   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	W1004 03:19:16.203033   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:19:16.203114   30630 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:19:16.203136   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:16.205729   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.205978   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.206109   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.206134   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.206286   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:16.206384   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.206425   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.206455   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:16.206610   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:16.206681   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:16.206748   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:19:16.206786   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:16.206947   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:16.207052   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:19:16.451088   30630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 03:19:16.457611   30630 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 03:19:16.457679   30630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:19:16.474500   30630 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 03:19:16.474524   30630 start.go:495] detecting cgroup driver to use...
	I1004 03:19:16.474577   30630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:19:16.491337   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:19:16.505852   30630 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:19:16.505915   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:19:16.519394   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:19:16.533389   30630 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:19:16.647440   30630 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:19:16.796026   30630 docker.go:233] disabling docker service ...
	I1004 03:19:16.796090   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:19:16.810390   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:19:16.824447   30630 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:19:16.967078   30630 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:19:17.099949   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:19:17.114752   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:19:17.134460   30630 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:19:17.134514   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.144920   30630 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:19:17.144984   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.155252   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.165315   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.175583   30630 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:19:17.186303   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.198678   30630 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.217975   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.229419   30630 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:19:17.241337   30630 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 03:19:17.241386   30630 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 03:19:17.254390   30630 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:19:17.264806   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:19:17.402028   30630 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 03:19:17.495758   30630 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:19:17.495841   30630 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:19:17.500623   30630 start.go:563] Will wait 60s for crictl version
	I1004 03:19:17.500678   30630 ssh_runner.go:195] Run: which crictl
	I1004 03:19:17.504705   30630 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:19:17.550368   30630 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 03:19:17.550468   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:19:17.578910   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:19:17.612824   30630 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 03:19:17.614302   30630 out.go:177]   - env NO_PROXY=192.168.39.65
	I1004 03:19:17.615583   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetIP
	I1004 03:19:17.618499   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:17.619022   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:17.619049   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:17.619276   30630 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 03:19:17.623687   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:19:17.636797   30630 mustload.go:65] Loading cluster: ha-994751
	I1004 03:19:17.637003   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:19:17.637273   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:19:17.637322   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:19:17.651836   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38711
	I1004 03:19:17.652278   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:19:17.652784   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:19:17.652801   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:19:17.653111   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:19:17.653311   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:19:17.654878   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:19:17.655231   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:19:17.655273   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:19:17.669844   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I1004 03:19:17.670238   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:19:17.670702   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:19:17.670716   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:19:17.671055   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:19:17.671261   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:19:17.671448   30630 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751 for IP: 192.168.39.117
	I1004 03:19:17.671472   30630 certs.go:194] generating shared ca certs ...
	I1004 03:19:17.671486   30630 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:19:17.671619   30630 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 03:19:17.671665   30630 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 03:19:17.671678   30630 certs.go:256] generating profile certs ...
	I1004 03:19:17.671769   30630 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key
	I1004 03:19:17.671816   30630 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.7edcc3fb
	I1004 03:19:17.671836   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.7edcc3fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65 192.168.39.117 192.168.39.254]
	I1004 03:19:17.982961   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.7edcc3fb ...
	I1004 03:19:17.982990   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.7edcc3fb: {Name:mka857c573044186dc7f952f5b2ab8a540e4e52f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:19:17.983170   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.7edcc3fb ...
	I1004 03:19:17.983188   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.7edcc3fb: {Name:mka872bfad80f36ccf6cfb0285b019b3212263dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:19:17.983268   30630 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.7edcc3fb -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt
	I1004 03:19:17.983413   30630 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.7edcc3fb -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key
	I1004 03:19:17.983593   30630 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key
	I1004 03:19:17.983610   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:19:17.983628   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:19:17.983649   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:19:17.983666   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:19:17.983685   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:19:17.983700   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:19:17.983717   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:19:17.983736   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:19:17.983821   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 03:19:17.983865   30630 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 03:19:17.983877   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:19:17.983909   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:19:17.983943   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:19:17.984054   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 03:19:17.984129   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:19:17.984175   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /usr/share/ca-certificates/168792.pem
	I1004 03:19:17.984197   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:19:17.984216   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem -> /usr/share/ca-certificates/16879.pem
	I1004 03:19:17.984276   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:19:17.987517   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:19:17.987891   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:19:17.987919   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:19:17.988138   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:19:17.988361   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:19:17.988505   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:19:17.988670   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:19:18.060182   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1004 03:19:18.065324   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1004 03:19:18.078017   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1004 03:19:18.082669   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1004 03:19:18.094668   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1004 03:19:18.099036   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1004 03:19:18.110596   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1004 03:19:18.115397   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1004 03:19:18.126291   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1004 03:19:18.131864   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1004 03:19:18.143496   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1004 03:19:18.147678   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1004 03:19:18.158714   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:19:18.185425   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 03:19:18.212989   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:19:18.238721   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 03:19:18.265688   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1004 03:19:18.292564   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 03:19:18.318046   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:19:18.343621   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:19:18.367533   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 03:19:18.391460   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:19:18.414533   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 03:19:18.437881   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1004 03:19:18.454162   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1004 03:19:18.470435   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1004 03:19:18.487697   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1004 03:19:18.504422   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1004 03:19:18.521609   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1004 03:19:18.538712   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1004 03:19:18.555759   30630 ssh_runner.go:195] Run: openssl version
	I1004 03:19:18.561485   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 03:19:18.572838   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 03:19:18.578085   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:19:18.578150   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 03:19:18.584699   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:19:18.596515   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:19:18.608107   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:19:18.613090   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:19:18.613151   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:19:18.619060   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:19:18.630222   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 03:19:18.642211   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 03:19:18.646675   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:19:18.646733   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 03:19:18.652690   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 03:19:18.663892   30630 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:19:18.668101   30630 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 03:19:18.668177   30630 kubeadm.go:934] updating node {m02 192.168.39.117 8443 v1.31.1 crio true true} ...
	I1004 03:19:18.668262   30630 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-994751-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 03:19:18.668287   30630 kube-vip.go:115] generating kube-vip config ...
	I1004 03:19:18.668368   30630 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1004 03:19:18.686599   30630 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1004 03:19:18.686662   30630 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1004 03:19:18.686715   30630 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:19:18.697844   30630 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1004 03:19:18.697908   30630 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1004 03:19:18.708942   30630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1004 03:19:18.708972   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1004 03:19:18.708991   30630 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1004 03:19:18.709028   30630 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1004 03:19:18.709031   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1004 03:19:18.713612   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1004 03:19:18.713636   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1004 03:19:19.809158   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:19:19.826203   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1004 03:19:19.826314   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1004 03:19:19.830837   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1004 03:19:19.830871   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1004 03:19:19.978327   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1004 03:19:19.978413   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1004 03:19:19.988543   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1004 03:19:19.988589   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1004 03:19:20.364768   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1004 03:19:20.374518   30630 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1004 03:19:20.391501   30630 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:19:20.408160   30630 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1004 03:19:20.424511   30630 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:19:20.428280   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:19:20.439917   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:19:20.559800   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:19:20.576330   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:19:20.576654   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:19:20.576692   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:19:20.592425   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39375
	I1004 03:19:20.593014   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:19:20.593564   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:19:20.593590   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:19:20.593896   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:19:20.594067   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:19:20.594173   30630 start.go:317] joinCluster: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:19:20.594288   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1004 03:19:20.594307   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:19:20.597288   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:19:20.597706   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:19:20.597738   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:19:20.597851   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:19:20.598146   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:19:20.598359   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:19:20.598601   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:19:20.751261   30630 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:19:20.751313   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tfpvu2.gfmxns87jp8m6lea --discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-994751-m02 --control-plane --apiserver-advertise-address=192.168.39.117 --apiserver-bind-port=8443"
	I1004 03:19:42.477327   30630 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tfpvu2.gfmxns87jp8m6lea --discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-994751-m02 --control-plane --apiserver-advertise-address=192.168.39.117 --apiserver-bind-port=8443": (21.725989536s)
	I1004 03:19:42.477374   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1004 03:19:43.011388   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-994751-m02 minikube.k8s.io/updated_at=2024_10_04T03_19_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-994751 minikube.k8s.io/primary=false
	I1004 03:19:43.128289   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-994751-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1004 03:19:43.240778   30630 start.go:319] duration metric: took 22.646600164s to joinCluster
	I1004 03:19:43.240848   30630 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:19:43.241147   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:19:43.242449   30630 out.go:177] * Verifying Kubernetes components...
	I1004 03:19:43.243651   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:19:43.505854   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:19:43.526989   30630 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:19:43.527348   30630 kapi.go:59] client config for ha-994751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key", CAFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1004 03:19:43.527435   30630 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.65:8443
	I1004 03:19:43.527706   30630 node_ready.go:35] waiting up to 6m0s for node "ha-994751-m02" to be "Ready" ...
	I1004 03:19:43.527836   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:43.527848   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:43.527859   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:43.527864   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:43.538086   30630 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1004 03:19:44.028570   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:44.028592   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:44.028599   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:44.028604   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:44.034683   30630 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:19:44.528680   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:44.528707   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:44.528719   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:44.528727   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:44.532210   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:45.028095   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:45.028116   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:45.028124   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:45.028128   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:45.031650   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:45.528659   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:45.528681   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:45.528689   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:45.528693   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:45.532032   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:45.532726   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:46.028184   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:46.028208   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:46.028220   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:46.028224   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:46.031876   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:46.528850   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:46.528870   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:46.528878   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:46.528883   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:46.532535   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:47.028593   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:47.028614   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:47.028622   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:47.028625   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:47.032488   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:47.528380   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:47.528406   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:47.528417   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:47.528423   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:47.532834   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:19:47.533292   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:48.028846   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:48.028866   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:48.028876   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:48.028879   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:48.033387   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:19:48.527941   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:48.527965   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:48.527976   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:48.527982   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:48.531255   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:49.027941   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:49.027974   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:49.027982   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:49.027985   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:49.032078   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:19:49.527942   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:49.527977   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:49.527988   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:49.527993   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:49.531336   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:50.027938   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:50.027962   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:50.027970   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:50.027975   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:50.031574   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:50.032261   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:50.528731   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:50.528756   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:50.528762   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:50.528766   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:50.533072   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:19:51.028280   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:51.028305   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:51.028315   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:51.028318   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:51.031958   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:51.527942   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:51.527963   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:51.527971   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:51.527975   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:51.531671   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:52.028715   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:52.028739   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:52.028747   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:52.028752   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:52.032273   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:52.032782   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:52.528521   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:52.528543   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:52.528553   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:52.528556   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:52.532328   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:53.028497   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:53.028519   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:53.028533   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:53.028536   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:53.031845   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:53.527963   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:53.527986   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:53.527995   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:53.527999   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:53.531468   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:54.028502   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:54.028524   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:54.028533   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:54.028537   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:54.032380   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:54.032974   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:54.528253   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:54.528276   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:54.528286   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:54.528293   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:54.531649   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:55.028786   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:55.028804   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:55.028812   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:55.028817   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:55.032371   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:55.527931   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:55.527953   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:55.527961   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:55.527965   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:55.531477   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:56.028492   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:56.028512   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:56.028519   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:56.028524   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:56.031319   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:19:56.527963   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:56.527981   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:56.527990   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:56.527993   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:56.531347   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:56.531854   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:57.027943   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:57.027962   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:57.027970   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:57.027979   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:57.031176   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:57.527972   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:57.527995   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:57.528006   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:57.528011   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:57.531355   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:58.028084   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:58.028103   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:58.028111   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:58.028115   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:58.034080   30630 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 03:19:58.527939   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:58.527959   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:58.527967   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:58.527972   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:58.530892   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:19:59.027908   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:59.027929   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:59.027938   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:59.027943   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:59.031093   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:59.031750   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:59.528117   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:59.528140   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:59.528148   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:59.528152   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:59.531338   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.027934   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:00.027956   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.027964   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.027968   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.031243   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.527969   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:00.527990   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.527998   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.528002   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.535322   30630 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:20:00.536101   30630 node_ready.go:49] node "ha-994751-m02" has status "Ready":"True"
	I1004 03:20:00.536141   30630 node_ready.go:38] duration metric: took 17.008396711s for node "ha-994751-m02" to be "Ready" ...
	I1004 03:20:00.536154   30630 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:20:00.536255   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:00.536269   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.536281   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.536287   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.550231   30630 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1004 03:20:00.558943   30630 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.559041   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-l6zst
	I1004 03:20:00.559052   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.559063   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.559071   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.562462   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.563534   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.563551   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.563558   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.563562   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.566458   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.567373   30630 pod_ready.go:93] pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.567390   30630 pod_ready.go:82] duration metric: took 8.418573ms for pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.567399   30630 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.567443   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zgdck
	I1004 03:20:00.567450   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.567457   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.567461   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.571010   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.572015   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.572028   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.572035   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.572040   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.574144   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.574637   30630 pod_ready.go:93] pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.574653   30630 pod_ready.go:82] duration metric: took 7.248385ms for pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.574660   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.574701   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751
	I1004 03:20:00.574708   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.574714   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.574718   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.577426   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.578237   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.578256   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.578262   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.578268   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.581297   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.582104   30630 pod_ready.go:93] pod "etcd-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.582124   30630 pod_ready.go:82] duration metric: took 7.457921ms for pod "etcd-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.582136   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.582194   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751-m02
	I1004 03:20:00.582206   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.582213   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.582218   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.584954   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.586074   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:00.586089   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.586096   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.586098   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.588315   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.588797   30630 pod_ready.go:93] pod "etcd-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.588819   30630 pod_ready.go:82] duration metric: took 6.675728ms for pod "etcd-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.588836   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.728447   30630 request.go:632] Waited for 139.544334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751
	I1004 03:20:00.728509   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751
	I1004 03:20:00.728514   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.728522   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.728527   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.732242   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.928492   30630 request.go:632] Waited for 195.478493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.928550   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.928556   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.928563   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.928567   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.932014   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.932660   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.932680   30630 pod_ready.go:82] duration metric: took 343.837498ms for pod "kube-apiserver-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.932690   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.128708   30630 request.go:632] Waited for 195.949159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m02
	I1004 03:20:01.128769   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m02
	I1004 03:20:01.128778   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.128786   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.128790   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.131924   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:01.328936   30630 request.go:632] Waited for 196.247417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:01.328982   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:01.328986   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.328993   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.328999   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.332116   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:01.332718   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:01.332735   30630 pod_ready.go:82] duration metric: took 400.039408ms for pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.332744   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.528985   30630 request.go:632] Waited for 196.178172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751
	I1004 03:20:01.529051   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751
	I1004 03:20:01.529057   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.529064   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.529068   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.532813   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:01.728751   30630 request.go:632] Waited for 195.374296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:01.728822   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:01.728828   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.728835   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.728838   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.732685   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:01.733267   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:01.733284   30630 pod_ready.go:82] duration metric: took 400.533757ms for pod "kube-controller-manager-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.733292   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.928444   30630 request.go:632] Waited for 195.093384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m02
	I1004 03:20:01.928511   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m02
	I1004 03:20:01.928517   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.928523   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.928531   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.931659   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.128724   30630 request.go:632] Waited for 196.347214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:02.128778   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:02.128783   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.128789   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.128794   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.132222   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.132803   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:02.132822   30630 pod_ready.go:82] duration metric: took 399.524177ms for pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.132832   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f44b9" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.328210   30630 request.go:632] Waited for 195.309099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f44b9
	I1004 03:20:02.328274   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f44b9
	I1004 03:20:02.328281   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.328288   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.328293   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.331313   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.528409   30630 request.go:632] Waited for 196.390078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:02.528468   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:02.528474   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.528481   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.528486   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.531912   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.532422   30630 pod_ready.go:93] pod "kube-proxy-f44b9" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:02.532446   30630 pod_ready.go:82] duration metric: took 399.600972ms for pod "kube-proxy-f44b9" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.532455   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ph6cf" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.728449   30630 request.go:632] Waited for 195.932314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ph6cf
	I1004 03:20:02.728525   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ph6cf
	I1004 03:20:02.728531   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.728539   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.728547   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.732138   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.928159   30630 request.go:632] Waited for 195.316789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:02.928222   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:02.928227   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.928234   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.928238   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.931607   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.932124   30630 pod_ready.go:93] pod "kube-proxy-ph6cf" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:02.932148   30630 pod_ready.go:82] duration metric: took 399.687611ms for pod "kube-proxy-ph6cf" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.932157   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:03.128514   30630 request.go:632] Waited for 196.295312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751
	I1004 03:20:03.128566   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751
	I1004 03:20:03.128571   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.128579   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.128585   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.131954   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:03.328958   30630 request.go:632] Waited for 196.406685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:03.329017   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:03.329023   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.329031   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.329039   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.332357   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:03.332971   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:03.332988   30630 pod_ready.go:82] duration metric: took 400.824355ms for pod "kube-scheduler-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:03.332997   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:03.528105   30630 request.go:632] Waited for 195.029512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m02
	I1004 03:20:03.528157   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m02
	I1004 03:20:03.528162   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.528169   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.528173   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.531733   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:03.727947   30630 request.go:632] Waited for 195.304105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:03.728022   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:03.728029   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.728038   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.728046   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.731222   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:03.731799   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:03.731823   30630 pod_ready.go:82] duration metric: took 398.818433ms for pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:03.731836   30630 pod_ready.go:39] duration metric: took 3.195663558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:20:03.731854   30630 api_server.go:52] waiting for apiserver process to appear ...
	I1004 03:20:03.731914   30630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:20:03.748156   30630 api_server.go:72] duration metric: took 20.507274316s to wait for apiserver process to appear ...
	I1004 03:20:03.748186   30630 api_server.go:88] waiting for apiserver healthz status ...
	I1004 03:20:03.748208   30630 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I1004 03:20:03.752562   30630 api_server.go:279] https://192.168.39.65:8443/healthz returned 200:
	ok
	I1004 03:20:03.752615   30630 round_trippers.go:463] GET https://192.168.39.65:8443/version
	I1004 03:20:03.752620   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.752627   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.752633   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.753368   30630 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1004 03:20:03.753569   30630 api_server.go:141] control plane version: v1.31.1
	I1004 03:20:03.753592   30630 api_server.go:131] duration metric: took 5.397003ms to wait for apiserver health ...
	I1004 03:20:03.753601   30630 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 03:20:03.928947   30630 request.go:632] Waited for 175.282043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:03.929032   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:03.929040   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.929049   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.929055   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.934063   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:20:03.938318   30630 system_pods.go:59] 17 kube-system pods found
	I1004 03:20:03.938350   30630 system_pods.go:61] "coredns-7c65d6cfc9-l6zst" [554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab] Running
	I1004 03:20:03.938358   30630 system_pods.go:61] "coredns-7c65d6cfc9-zgdck" [dcd6ed49-8491-4eb0-9863-b498c76ec3c5] Running
	I1004 03:20:03.938363   30630 system_pods.go:61] "etcd-ha-994751" [ad26bfe8-b3b3-44fb-8f83-fd4f62a92ea6] Running
	I1004 03:20:03.938369   30630 system_pods.go:61] "etcd-ha-994751-m02" [540bc27b-d1ee-48e7-99a3-5daf036ec06f] Running
	I1004 03:20:03.938373   30630 system_pods.go:61] "kindnet-2mhh2" [442d5ad9-dc9c-4a07-90b3-549591f9d2f1] Running
	I1004 03:20:03.938378   30630 system_pods.go:61] "kindnet-rmcvt" [08ef4494-9229-4cd3-b22e-10709b88e14c] Running
	I1004 03:20:03.938383   30630 system_pods.go:61] "kube-apiserver-ha-994751" [68f6b078-f7ae-4c2d-b424-372647c7d203] Running
	I1004 03:20:03.938387   30630 system_pods.go:61] "kube-apiserver-ha-994751-m02" [f4773005-30fa-46bc-b372-bbd0c7bfc1f1] Running
	I1004 03:20:03.938392   30630 system_pods.go:61] "kube-controller-manager-ha-994751" [7a09373f-d6d5-4fa2-bc68-0f2ba151761f] Running
	I1004 03:20:03.938397   30630 system_pods.go:61] "kube-controller-manager-ha-994751-m02" [31278e89-fb33-4713-b0e4-3b589944caa9] Running
	I1004 03:20:03.938402   30630 system_pods.go:61] "kube-proxy-f44b9" [e3e1a917-0150-4608-b5f3-b590d330d2ce] Running
	I1004 03:20:03.938408   30630 system_pods.go:61] "kube-proxy-ph6cf" [0eef5964-876b-4cee-ad4c-c93ab034d3f9] Running
	I1004 03:20:03.938416   30630 system_pods.go:61] "kube-scheduler-ha-994751" [527e5bff-7234-4ab0-9952-fe9ec87ab01a] Running
	I1004 03:20:03.938422   30630 system_pods.go:61] "kube-scheduler-ha-994751-m02" [9c89432d-e607-4e86-8d68-12ee0c2f3170] Running
	I1004 03:20:03.938430   30630 system_pods.go:61] "kube-vip-ha-994751" [7955e17e-6c22-49b3-aa6a-9d37b9bc7942] Running
	I1004 03:20:03.938435   30630 system_pods.go:61] "kube-vip-ha-994751-m02" [944f1844-b8c2-410e-981d-3705beb22638] Running
	I1004 03:20:03.938440   30630 system_pods.go:61] "storage-provisioner" [cc60903f-91b9-4e59-92ab-9f16c09d38d2] Running
	I1004 03:20:03.938450   30630 system_pods.go:74] duration metric: took 184.842668ms to wait for pod list to return data ...
	I1004 03:20:03.938469   30630 default_sa.go:34] waiting for default service account to be created ...
	I1004 03:20:04.128894   30630 request.go:632] Waited for 190.327691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:20:04.128944   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:20:04.128949   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:04.128956   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:04.128960   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:04.132905   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:04.133105   30630 default_sa.go:45] found service account: "default"
	I1004 03:20:04.133122   30630 default_sa.go:55] duration metric: took 194.645917ms for default service account to be created ...
	I1004 03:20:04.133132   30630 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 03:20:04.328598   30630 request.go:632] Waited for 195.393579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:04.328702   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:04.328730   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:04.328744   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:04.328753   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:04.333188   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:20:04.337805   30630 system_pods.go:86] 17 kube-system pods found
	I1004 03:20:04.337832   30630 system_pods.go:89] "coredns-7c65d6cfc9-l6zst" [554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab] Running
	I1004 03:20:04.337838   30630 system_pods.go:89] "coredns-7c65d6cfc9-zgdck" [dcd6ed49-8491-4eb0-9863-b498c76ec3c5] Running
	I1004 03:20:04.337842   30630 system_pods.go:89] "etcd-ha-994751" [ad26bfe8-b3b3-44fb-8f83-fd4f62a92ea6] Running
	I1004 03:20:04.337848   30630 system_pods.go:89] "etcd-ha-994751-m02" [540bc27b-d1ee-48e7-99a3-5daf036ec06f] Running
	I1004 03:20:04.337851   30630 system_pods.go:89] "kindnet-2mhh2" [442d5ad9-dc9c-4a07-90b3-549591f9d2f1] Running
	I1004 03:20:04.337855   30630 system_pods.go:89] "kindnet-rmcvt" [08ef4494-9229-4cd3-b22e-10709b88e14c] Running
	I1004 03:20:04.337859   30630 system_pods.go:89] "kube-apiserver-ha-994751" [68f6b078-f7ae-4c2d-b424-372647c7d203] Running
	I1004 03:20:04.337863   30630 system_pods.go:89] "kube-apiserver-ha-994751-m02" [f4773005-30fa-46bc-b372-bbd0c7bfc1f1] Running
	I1004 03:20:04.337867   30630 system_pods.go:89] "kube-controller-manager-ha-994751" [7a09373f-d6d5-4fa2-bc68-0f2ba151761f] Running
	I1004 03:20:04.337874   30630 system_pods.go:89] "kube-controller-manager-ha-994751-m02" [31278e89-fb33-4713-b0e4-3b589944caa9] Running
	I1004 03:20:04.337878   30630 system_pods.go:89] "kube-proxy-f44b9" [e3e1a917-0150-4608-b5f3-b590d330d2ce] Running
	I1004 03:20:04.337885   30630 system_pods.go:89] "kube-proxy-ph6cf" [0eef5964-876b-4cee-ad4c-c93ab034d3f9] Running
	I1004 03:20:04.337889   30630 system_pods.go:89] "kube-scheduler-ha-994751" [527e5bff-7234-4ab0-9952-fe9ec87ab01a] Running
	I1004 03:20:04.337901   30630 system_pods.go:89] "kube-scheduler-ha-994751-m02" [9c89432d-e607-4e86-8d68-12ee0c2f3170] Running
	I1004 03:20:04.337904   30630 system_pods.go:89] "kube-vip-ha-994751" [7955e17e-6c22-49b3-aa6a-9d37b9bc7942] Running
	I1004 03:20:04.337907   30630 system_pods.go:89] "kube-vip-ha-994751-m02" [944f1844-b8c2-410e-981d-3705beb22638] Running
	I1004 03:20:04.337912   30630 system_pods.go:89] "storage-provisioner" [cc60903f-91b9-4e59-92ab-9f16c09d38d2] Running
	I1004 03:20:04.337921   30630 system_pods.go:126] duration metric: took 204.78361ms to wait for k8s-apps to be running ...
	I1004 03:20:04.337930   30630 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 03:20:04.337975   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:20:04.352705   30630 system_svc.go:56] duration metric: took 14.766178ms WaitForService to wait for kubelet
	I1004 03:20:04.352728   30630 kubeadm.go:582] duration metric: took 21.111850874s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:20:04.352744   30630 node_conditions.go:102] verifying NodePressure condition ...
	I1004 03:20:04.528049   30630 request.go:632] Waited for 175.240806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes
	I1004 03:20:04.528140   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes
	I1004 03:20:04.528148   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:04.528158   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:04.528166   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:04.532040   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:04.532645   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:20:04.532668   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:20:04.532682   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:20:04.532689   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:20:04.532696   30630 node_conditions.go:105] duration metric: took 179.947049ms to run NodePressure ...
	I1004 03:20:04.532711   30630 start.go:241] waiting for startup goroutines ...
	I1004 03:20:04.532748   30630 start.go:255] writing updated cluster config ...
	I1004 03:20:04.534798   30630 out.go:201] 
	I1004 03:20:04.536250   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:20:04.536346   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:20:04.537713   30630 out.go:177] * Starting "ha-994751-m03" control-plane node in "ha-994751" cluster
	I1004 03:20:04.538772   30630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:20:04.538791   30630 cache.go:56] Caching tarball of preloaded images
	I1004 03:20:04.538881   30630 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 03:20:04.538892   30630 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:20:04.538970   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:20:04.539124   30630 start.go:360] acquireMachinesLock for ha-994751-m03: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 03:20:04.539179   30630 start.go:364] duration metric: took 32.772µs to acquireMachinesLock for "ha-994751-m03"
	I1004 03:20:04.539202   30630 start.go:93] Provisioning new machine with config: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:20:04.539327   30630 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1004 03:20:04.540776   30630 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 03:20:04.540857   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:20:04.540889   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:20:04.555427   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46337
	I1004 03:20:04.555831   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:20:04.556364   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:20:04.556394   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:20:04.556738   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:20:04.556921   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetMachineName
	I1004 03:20:04.557038   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:04.557175   30630 start.go:159] libmachine.API.Create for "ha-994751" (driver="kvm2")
	I1004 03:20:04.557204   30630 client.go:168] LocalClient.Create starting
	I1004 03:20:04.557233   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 03:20:04.557271   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:20:04.557291   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:20:04.557375   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 03:20:04.557421   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:20:04.557449   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:20:04.557481   30630 main.go:141] libmachine: Running pre-create checks...
	I1004 03:20:04.557495   30630 main.go:141] libmachine: (ha-994751-m03) Calling .PreCreateCheck
	I1004 03:20:04.557705   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetConfigRaw
	I1004 03:20:04.558081   30630 main.go:141] libmachine: Creating machine...
	I1004 03:20:04.558096   30630 main.go:141] libmachine: (ha-994751-m03) Calling .Create
	I1004 03:20:04.558257   30630 main.go:141] libmachine: (ha-994751-m03) Creating KVM machine...
	I1004 03:20:04.559668   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found existing default KVM network
	I1004 03:20:04.559869   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found existing private KVM network mk-ha-994751
	I1004 03:20:04.560039   30630 main.go:141] libmachine: (ha-994751-m03) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03 ...
	I1004 03:20:04.560065   30630 main.go:141] libmachine: (ha-994751-m03) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 03:20:04.560110   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:04.560016   31400 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:20:04.560192   30630 main.go:141] libmachine: (ha-994751-m03) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 03:20:04.808276   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:04.808145   31400 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa...
	I1004 03:20:05.005812   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:05.005703   31400 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/ha-994751-m03.rawdisk...
	I1004 03:20:05.005838   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Writing magic tar header
	I1004 03:20:05.005848   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Writing SSH key tar header
	I1004 03:20:05.005856   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:05.005807   31400 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03 ...
	I1004 03:20:05.005932   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03
	I1004 03:20:05.005971   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 03:20:05.006001   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03 (perms=drwx------)
	I1004 03:20:05.006011   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:20:05.006021   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 03:20:05.006034   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 03:20:05.006047   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 03:20:05.006063   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 03:20:05.006075   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 03:20:05.006086   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 03:20:05.006100   30630 main.go:141] libmachine: (ha-994751-m03) Creating domain...
	I1004 03:20:05.006109   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 03:20:05.006122   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins
	I1004 03:20:05.006135   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home
	I1004 03:20:05.006147   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Skipping /home - not owner
	I1004 03:20:05.007092   30630 main.go:141] libmachine: (ha-994751-m03) define libvirt domain using xml: 
	I1004 03:20:05.007116   30630 main.go:141] libmachine: (ha-994751-m03) <domain type='kvm'>
	I1004 03:20:05.007126   30630 main.go:141] libmachine: (ha-994751-m03)   <name>ha-994751-m03</name>
	I1004 03:20:05.007139   30630 main.go:141] libmachine: (ha-994751-m03)   <memory unit='MiB'>2200</memory>
	I1004 03:20:05.007151   30630 main.go:141] libmachine: (ha-994751-m03)   <vcpu>2</vcpu>
	I1004 03:20:05.007158   30630 main.go:141] libmachine: (ha-994751-m03)   <features>
	I1004 03:20:05.007166   30630 main.go:141] libmachine: (ha-994751-m03)     <acpi/>
	I1004 03:20:05.007173   30630 main.go:141] libmachine: (ha-994751-m03)     <apic/>
	I1004 03:20:05.007177   30630 main.go:141] libmachine: (ha-994751-m03)     <pae/>
	I1004 03:20:05.007183   30630 main.go:141] libmachine: (ha-994751-m03)     
	I1004 03:20:05.007189   30630 main.go:141] libmachine: (ha-994751-m03)   </features>
	I1004 03:20:05.007198   30630 main.go:141] libmachine: (ha-994751-m03)   <cpu mode='host-passthrough'>
	I1004 03:20:05.007205   30630 main.go:141] libmachine: (ha-994751-m03)   
	I1004 03:20:05.007209   30630 main.go:141] libmachine: (ha-994751-m03)   </cpu>
	I1004 03:20:05.007231   30630 main.go:141] libmachine: (ha-994751-m03)   <os>
	I1004 03:20:05.007247   30630 main.go:141] libmachine: (ha-994751-m03)     <type>hvm</type>
	I1004 03:20:05.007256   30630 main.go:141] libmachine: (ha-994751-m03)     <boot dev='cdrom'/>
	I1004 03:20:05.007270   30630 main.go:141] libmachine: (ha-994751-m03)     <boot dev='hd'/>
	I1004 03:20:05.007282   30630 main.go:141] libmachine: (ha-994751-m03)     <bootmenu enable='no'/>
	I1004 03:20:05.007301   30630 main.go:141] libmachine: (ha-994751-m03)   </os>
	I1004 03:20:05.007312   30630 main.go:141] libmachine: (ha-994751-m03)   <devices>
	I1004 03:20:05.007323   30630 main.go:141] libmachine: (ha-994751-m03)     <disk type='file' device='cdrom'>
	I1004 03:20:05.007339   30630 main.go:141] libmachine: (ha-994751-m03)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/boot2docker.iso'/>
	I1004 03:20:05.007353   30630 main.go:141] libmachine: (ha-994751-m03)       <target dev='hdc' bus='scsi'/>
	I1004 03:20:05.007365   30630 main.go:141] libmachine: (ha-994751-m03)       <readonly/>
	I1004 03:20:05.007373   30630 main.go:141] libmachine: (ha-994751-m03)     </disk>
	I1004 03:20:05.007385   30630 main.go:141] libmachine: (ha-994751-m03)     <disk type='file' device='disk'>
	I1004 03:20:05.007397   30630 main.go:141] libmachine: (ha-994751-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 03:20:05.007412   30630 main.go:141] libmachine: (ha-994751-m03)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/ha-994751-m03.rawdisk'/>
	I1004 03:20:05.007427   30630 main.go:141] libmachine: (ha-994751-m03)       <target dev='hda' bus='virtio'/>
	I1004 03:20:05.007439   30630 main.go:141] libmachine: (ha-994751-m03)     </disk>
	I1004 03:20:05.007448   30630 main.go:141] libmachine: (ha-994751-m03)     <interface type='network'>
	I1004 03:20:05.007465   30630 main.go:141] libmachine: (ha-994751-m03)       <source network='mk-ha-994751'/>
	I1004 03:20:05.007474   30630 main.go:141] libmachine: (ha-994751-m03)       <model type='virtio'/>
	I1004 03:20:05.007484   30630 main.go:141] libmachine: (ha-994751-m03)     </interface>
	I1004 03:20:05.007498   30630 main.go:141] libmachine: (ha-994751-m03)     <interface type='network'>
	I1004 03:20:05.007510   30630 main.go:141] libmachine: (ha-994751-m03)       <source network='default'/>
	I1004 03:20:05.007520   30630 main.go:141] libmachine: (ha-994751-m03)       <model type='virtio'/>
	I1004 03:20:05.007530   30630 main.go:141] libmachine: (ha-994751-m03)     </interface>
	I1004 03:20:05.007540   30630 main.go:141] libmachine: (ha-994751-m03)     <serial type='pty'>
	I1004 03:20:05.007550   30630 main.go:141] libmachine: (ha-994751-m03)       <target port='0'/>
	I1004 03:20:05.007559   30630 main.go:141] libmachine: (ha-994751-m03)     </serial>
	I1004 03:20:05.007576   30630 main.go:141] libmachine: (ha-994751-m03)     <console type='pty'>
	I1004 03:20:05.007591   30630 main.go:141] libmachine: (ha-994751-m03)       <target type='serial' port='0'/>
	I1004 03:20:05.007600   30630 main.go:141] libmachine: (ha-994751-m03)     </console>
	I1004 03:20:05.007608   30630 main.go:141] libmachine: (ha-994751-m03)     <rng model='virtio'>
	I1004 03:20:05.007614   30630 main.go:141] libmachine: (ha-994751-m03)       <backend model='random'>/dev/random</backend>
	I1004 03:20:05.007620   30630 main.go:141] libmachine: (ha-994751-m03)     </rng>
	I1004 03:20:05.007628   30630 main.go:141] libmachine: (ha-994751-m03)     
	I1004 03:20:05.007636   30630 main.go:141] libmachine: (ha-994751-m03)     
	I1004 03:20:05.007652   30630 main.go:141] libmachine: (ha-994751-m03)   </devices>
	I1004 03:20:05.007666   30630 main.go:141] libmachine: (ha-994751-m03) </domain>
	I1004 03:20:05.007678   30630 main.go:141] libmachine: (ha-994751-m03) 
	I1004 03:20:05.014475   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:d0:97:18 in network default
	I1004 03:20:05.015005   30630 main.go:141] libmachine: (ha-994751-m03) Ensuring networks are active...
	I1004 03:20:05.015041   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:05.015645   30630 main.go:141] libmachine: (ha-994751-m03) Ensuring network default is active
	I1004 03:20:05.015928   30630 main.go:141] libmachine: (ha-994751-m03) Ensuring network mk-ha-994751 is active
	I1004 03:20:05.016249   30630 main.go:141] libmachine: (ha-994751-m03) Getting domain xml...
	I1004 03:20:05.016929   30630 main.go:141] libmachine: (ha-994751-m03) Creating domain...
	I1004 03:20:06.261440   30630 main.go:141] libmachine: (ha-994751-m03) Waiting to get IP...
	I1004 03:20:06.262071   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:06.262414   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:06.262472   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:06.262421   31400 retry.go:31] will retry after 250.348601ms: waiting for machine to come up
	I1004 03:20:06.515070   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:06.515535   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:06.515565   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:06.515468   31400 retry.go:31] will retry after 243.422578ms: waiting for machine to come up
	I1004 03:20:06.760919   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:06.761413   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:06.761440   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:06.761366   31400 retry.go:31] will retry after 323.138496ms: waiting for machine to come up
	I1004 03:20:07.085754   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:07.086220   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:07.086254   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:07.086174   31400 retry.go:31] will retry after 589.608599ms: waiting for machine to come up
	I1004 03:20:07.676793   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:07.677255   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:07.677277   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:07.677220   31400 retry.go:31] will retry after 686.955192ms: waiting for machine to come up
	I1004 03:20:08.365977   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:08.366366   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:08.366390   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:08.366322   31400 retry.go:31] will retry after 861.927469ms: waiting for machine to come up
	I1004 03:20:09.229974   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:09.230402   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:09.230431   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:09.230354   31400 retry.go:31] will retry after 766.03024ms: waiting for machine to come up
	I1004 03:20:09.997533   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:09.997938   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:09.997963   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:09.997907   31400 retry.go:31] will retry after 980.127757ms: waiting for machine to come up
	I1004 03:20:10.979306   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:10.979718   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:10.979743   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:10.979684   31400 retry.go:31] will retry after 1.544904084s: waiting for machine to come up
	I1004 03:20:12.525854   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:12.526225   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:12.526249   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:12.526177   31400 retry.go:31] will retry after 1.432028005s: waiting for machine to come up
	I1004 03:20:13.960907   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:13.961388   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:13.961415   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:13.961367   31400 retry.go:31] will retry after 1.927604807s: waiting for machine to come up
	I1004 03:20:15.890697   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:15.891148   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:15.891175   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:15.891091   31400 retry.go:31] will retry after 3.506356031s: waiting for machine to come up
	I1004 03:20:19.400810   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:19.401322   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:19.401349   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:19.401272   31400 retry.go:31] will retry after 3.367410839s: waiting for machine to come up
	I1004 03:20:22.769867   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:22.770373   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:22.770407   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:22.770302   31400 retry.go:31] will retry after 5.266869096s: waiting for machine to come up
	I1004 03:20:28.041532   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:28.041995   30630 main.go:141] libmachine: (ha-994751-m03) Found IP for machine: 192.168.39.53
	I1004 03:20:28.042014   30630 main.go:141] libmachine: (ha-994751-m03) Reserving static IP address...
	I1004 03:20:28.042026   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has current primary IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:28.042375   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find host DHCP lease matching {name: "ha-994751-m03", mac: "52:54:00:49:76:ea", ip: "192.168.39.53"} in network mk-ha-994751
	I1004 03:20:28.115076   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Getting to WaitForSSH function...
	I1004 03:20:28.115105   30630 main.go:141] libmachine: (ha-994751-m03) Reserved static IP address: 192.168.39.53
	I1004 03:20:28.115145   30630 main.go:141] libmachine: (ha-994751-m03) Waiting for SSH to be available...
	I1004 03:20:28.117390   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:28.117662   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751
	I1004 03:20:28.117678   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find defined IP address of network mk-ha-994751 interface with MAC address 52:54:00:49:76:ea
	I1004 03:20:28.117841   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using SSH client type: external
	I1004 03:20:28.117866   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa (-rw-------)
	I1004 03:20:28.117909   30630 main.go:141] libmachine: (ha-994751-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 03:20:28.117924   30630 main.go:141] libmachine: (ha-994751-m03) DBG | About to run SSH command:
	I1004 03:20:28.117940   30630 main.go:141] libmachine: (ha-994751-m03) DBG | exit 0
	I1004 03:20:28.121632   30630 main.go:141] libmachine: (ha-994751-m03) DBG | SSH cmd err, output: exit status 255: 
	I1004 03:20:28.121657   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1004 03:20:28.121668   30630 main.go:141] libmachine: (ha-994751-m03) DBG | command : exit 0
	I1004 03:20:28.121677   30630 main.go:141] libmachine: (ha-994751-m03) DBG | err     : exit status 255
	I1004 03:20:28.121690   30630 main.go:141] libmachine: (ha-994751-m03) DBG | output  : 
	I1004 03:20:31.123157   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Getting to WaitForSSH function...
	I1004 03:20:31.125515   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.125954   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.125981   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.126121   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using SSH client type: external
	I1004 03:20:31.126148   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa (-rw-------)
	I1004 03:20:31.126175   30630 main.go:141] libmachine: (ha-994751-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 03:20:31.126186   30630 main.go:141] libmachine: (ha-994751-m03) DBG | About to run SSH command:
	I1004 03:20:31.126199   30630 main.go:141] libmachine: (ha-994751-m03) DBG | exit 0
	I1004 03:20:31.255788   30630 main.go:141] libmachine: (ha-994751-m03) DBG | SSH cmd err, output: <nil>: 
	I1004 03:20:31.256048   30630 main.go:141] libmachine: (ha-994751-m03) KVM machine creation complete!
	I1004 03:20:31.256416   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetConfigRaw
	I1004 03:20:31.257001   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:31.257196   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:31.257537   30630 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 03:20:31.257552   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetState
	I1004 03:20:31.258954   30630 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 03:20:31.258966   30630 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 03:20:31.258972   30630 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 03:20:31.258978   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.261065   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.261407   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.261432   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.261523   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.261696   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.261827   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.261939   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.262104   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.262338   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.262354   30630 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 03:20:31.371392   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:20:31.371421   30630 main.go:141] libmachine: Detecting the provisioner...
	I1004 03:20:31.371431   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.374360   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.374677   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.374703   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.374874   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.375093   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.375299   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.375463   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.375637   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.375858   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.375873   30630 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 03:20:31.489043   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 03:20:31.489093   30630 main.go:141] libmachine: found compatible host: buildroot
	I1004 03:20:31.489100   30630 main.go:141] libmachine: Provisioning with buildroot...
	I1004 03:20:31.489107   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetMachineName
	I1004 03:20:31.489333   30630 buildroot.go:166] provisioning hostname "ha-994751-m03"
	I1004 03:20:31.489357   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetMachineName
	I1004 03:20:31.489534   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.492101   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.492553   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.492573   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.492727   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.492907   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.493039   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.493147   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.493277   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.493442   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.493453   30630 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-994751-m03 && echo "ha-994751-m03" | sudo tee /etc/hostname
	I1004 03:20:31.626029   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-994751-m03
	
	I1004 03:20:31.626058   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.628598   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.629032   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.629055   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.629247   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.629454   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.629599   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.629757   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.629901   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.630075   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.630108   30630 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-994751-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-994751-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-994751-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:20:31.754855   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:20:31.754886   30630 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 03:20:31.754923   30630 buildroot.go:174] setting up certificates
	I1004 03:20:31.754934   30630 provision.go:84] configureAuth start
	I1004 03:20:31.754946   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetMachineName
	I1004 03:20:31.755194   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetIP
	I1004 03:20:31.757747   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.758065   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.758087   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.758193   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.760414   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.760746   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.760771   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.760844   30630 provision.go:143] copyHostCerts
	I1004 03:20:31.760875   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:20:31.760907   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 03:20:31.760915   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:20:31.760984   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 03:20:31.761064   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:20:31.761082   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 03:20:31.761088   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:20:31.761114   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 03:20:31.761166   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:20:31.761182   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 03:20:31.761188   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:20:31.761214   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 03:20:31.761271   30630 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.ha-994751-m03 san=[127.0.0.1 192.168.39.53 ha-994751-m03 localhost minikube]
	I1004 03:20:31.828214   30630 provision.go:177] copyRemoteCerts
	I1004 03:20:31.828263   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:20:31.828283   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.830707   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.831047   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.831078   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.831192   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.831375   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.831522   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.831636   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:20:31.917792   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:20:31.917859   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:20:31.943534   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:20:31.943606   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1004 03:20:31.968990   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:20:31.969060   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 03:20:31.992331   30630 provision.go:87] duration metric: took 237.384107ms to configureAuth
	I1004 03:20:31.992362   30630 buildroot.go:189] setting minikube options for container-runtime
	I1004 03:20:31.992622   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:20:31.992738   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.995570   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.995946   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.995975   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.996126   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.996306   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.996434   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.996569   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.996677   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.996863   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.996880   30630 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:20:32.229026   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:20:32.229061   30630 main.go:141] libmachine: Checking connection to Docker...
	I1004 03:20:32.229071   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetURL
	I1004 03:20:32.230237   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using libvirt version 6000000
	I1004 03:20:32.232533   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.232839   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.232870   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.233012   30630 main.go:141] libmachine: Docker is up and running!
	I1004 03:20:32.233029   30630 main.go:141] libmachine: Reticulating splines...
	I1004 03:20:32.233037   30630 client.go:171] duration metric: took 27.675822366s to LocalClient.Create
	I1004 03:20:32.233061   30630 start.go:167] duration metric: took 27.675885367s to libmachine.API.Create "ha-994751"
	I1004 03:20:32.233071   30630 start.go:293] postStartSetup for "ha-994751-m03" (driver="kvm2")
	I1004 03:20:32.233080   30630 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:20:32.233096   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.233315   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:20:32.233341   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:32.235889   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.236270   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.236297   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.236452   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:32.236641   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.236787   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:32.236936   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:20:32.321827   30630 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:20:32.326129   30630 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 03:20:32.326152   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 03:20:32.326232   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 03:20:32.326328   30630 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 03:20:32.326339   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /etc/ssl/certs/168792.pem
	I1004 03:20:32.326421   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:20:32.336376   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:20:32.359653   30630 start.go:296] duration metric: took 126.571809ms for postStartSetup
	I1004 03:20:32.359721   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetConfigRaw
	I1004 03:20:32.360268   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetIP
	I1004 03:20:32.362856   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.363243   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.363268   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.363469   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:20:32.363663   30630 start.go:128] duration metric: took 27.824325438s to createHost
	I1004 03:20:32.363686   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:32.365882   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.366210   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.366226   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.366350   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:32.366523   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.366674   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.366824   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:32.366985   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:32.367180   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:32.367194   30630 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 03:20:32.480703   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728012032.461011085
	
	I1004 03:20:32.480725   30630 fix.go:216] guest clock: 1728012032.461011085
	I1004 03:20:32.480735   30630 fix.go:229] Guest: 2024-10-04 03:20:32.461011085 +0000 UTC Remote: 2024-10-04 03:20:32.363675 +0000 UTC m=+146.676506004 (delta=97.336085ms)
	I1004 03:20:32.480753   30630 fix.go:200] guest clock delta is within tolerance: 97.336085ms
	I1004 03:20:32.480760   30630 start.go:83] releasing machines lock for "ha-994751-m03", held for 27.941569364s
	I1004 03:20:32.480780   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.480989   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetIP
	I1004 03:20:32.483796   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.484159   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.484191   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.486391   30630 out.go:177] * Found network options:
	I1004 03:20:32.487654   30630 out.go:177]   - NO_PROXY=192.168.39.65,192.168.39.117
	W1004 03:20:32.488913   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	W1004 03:20:32.488946   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:20:32.488964   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.489521   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.489776   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.489869   30630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:20:32.489906   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	W1004 03:20:32.489985   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	W1004 03:20:32.490009   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:20:32.490068   30630 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:20:32.490090   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:32.492646   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.492900   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.493125   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.493149   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.493245   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.493267   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.493334   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:32.493500   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.493556   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:32.493707   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:32.493736   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.493920   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:20:32.493987   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:32.494105   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:20:32.742057   30630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 03:20:32.749338   30630 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 03:20:32.749392   30630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:20:32.765055   30630 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 03:20:32.765079   30630 start.go:495] detecting cgroup driver to use...
	I1004 03:20:32.765139   30630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:20:32.780546   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:20:32.797729   30630 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:20:32.797789   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:20:32.810917   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:20:32.823880   30630 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:20:32.941749   30630 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:20:33.094803   30630 docker.go:233] disabling docker service ...
	I1004 03:20:33.094875   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:20:33.108945   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:20:33.122238   30630 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:20:33.259499   30630 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:20:33.382162   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:20:33.399956   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:20:33.419077   30630 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:20:33.419147   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.431123   30630 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:20:33.431176   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.442393   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.454523   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.465583   30630 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:20:33.477059   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.487953   30630 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.505077   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.515522   30630 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:20:33.526537   30630 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 03:20:33.526592   30630 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 03:20:33.540307   30630 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:20:33.550485   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:20:33.660459   30630 ssh_runner.go:195] Run: sudo systemctl restart crio
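The commands above rewrite /etc/crio/crio.conf.d/02-crio.conf over SSH (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restart crio. As a rough, hypothetical illustration of driving that kind of remote step from Go, the sketch below uses golang.org/x/crypto/ssh; the host, key path, and command list are placeholders and this is not minikube's ssh_runner implementation.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder connection details; minikube derives these from its machine config.
	key, err := os.ReadFile("/path/to/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.53:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only in throwaway test VMs
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// One session per command, mirroring one ssh_runner.Run per log line above.
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		"sudo systemctl restart crio",
	}
	for _, c := range cmds {
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		out, err := sess.CombinedOutput(c)
		sess.Close()
		if err != nil {
			log.Fatalf("%s: %v\n%s", c, err, out)
		}
		fmt.Printf("ok: %s\n", c)
	}
}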
	I1004 03:20:33.759769   30630 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:20:33.759862   30630 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:20:33.764677   30630 start.go:563] Will wait 60s for crictl version
	I1004 03:20:33.764728   30630 ssh_runner.go:195] Run: which crictl
	I1004 03:20:33.768748   30630 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:20:33.815756   30630 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 03:20:33.815849   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:20:33.843604   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:20:33.875395   30630 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 03:20:33.876902   30630 out.go:177]   - env NO_PROXY=192.168.39.65
	I1004 03:20:33.878202   30630 out.go:177]   - env NO_PROXY=192.168.39.65,192.168.39.117
	I1004 03:20:33.879354   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetIP
	I1004 03:20:33.881763   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:33.882075   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:33.882116   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:33.882282   30630 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 03:20:33.887016   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:20:33.900617   30630 mustload.go:65] Loading cluster: ha-994751
	I1004 03:20:33.900859   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:20:33.901101   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:20:33.901139   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:20:33.916080   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40599
	I1004 03:20:33.916545   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:20:33.917019   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:20:33.917038   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:20:33.917311   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:20:33.917490   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:20:33.918758   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:20:33.919091   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:20:33.919127   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:20:33.934895   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43781
	I1004 03:20:33.935369   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:20:33.935847   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:20:33.935870   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:20:33.936191   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:20:33.936373   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:20:33.936519   30630 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751 for IP: 192.168.39.53
	I1004 03:20:33.936531   30630 certs.go:194] generating shared ca certs ...
	I1004 03:20:33.936550   30630 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:20:33.936692   30630 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 03:20:33.936742   30630 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 03:20:33.936754   30630 certs.go:256] generating profile certs ...
	I1004 03:20:33.936848   30630 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key
	I1004 03:20:33.936877   30630 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.c7b5eb21
	I1004 03:20:33.936895   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.c7b5eb21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65 192.168.39.117 192.168.39.53 192.168.39.254]
	I1004 03:20:34.019919   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.c7b5eb21 ...
	I1004 03:20:34.019948   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.c7b5eb21: {Name:mk35ee00bf994088c6b50391189f3e324fc0101b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:20:34.020103   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.c7b5eb21 ...
	I1004 03:20:34.020114   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.c7b5eb21: {Name:mk408ba3330d2e90d98d309cc86d9e5e670f9570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:20:34.020180   30630 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.c7b5eb21 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt
	I1004 03:20:34.020296   30630 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.c7b5eb21 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key
	I1004 03:20:34.020411   30630 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key
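The cert step above regenerates the apiserver serving certificate so its SANs cover the cluster service IPs, every control-plane IP (192.168.39.65, .117, .53) and the kube-vip VIP 192.168.39.254. The sketch below builds a certificate with that SAN list using Go's crypto/x509; it is self-signed for brevity and purely illustrative, whereas minikube signs with its shared minikubeCA.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs mirror the IP list printed by crypto.go above.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.65"), net.ParseIP("192.168.39.117"),
		net.ParseIP("192.168.39.53"), net.ParseIP("192.168.39.254"),
	}
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	// Self-signed here; the real flow signs with the cluster CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}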
	I1004 03:20:34.020425   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:20:34.020438   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:20:34.020452   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:20:34.020465   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:20:34.020477   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:20:34.020489   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:20:34.020501   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:20:34.035820   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:20:34.035890   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 03:20:34.035926   30630 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 03:20:34.035946   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:20:34.035969   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:20:34.035990   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:20:34.036010   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 03:20:34.036045   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:20:34.036074   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem -> /usr/share/ca-certificates/16879.pem
	I1004 03:20:34.036087   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /usr/share/ca-certificates/168792.pem
	I1004 03:20:34.036100   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:20:34.036130   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:20:34.039080   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:20:34.039469   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:20:34.039485   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:20:34.039662   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:20:34.039893   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:20:34.040036   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:20:34.040151   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:20:34.112207   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1004 03:20:34.117935   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1004 03:20:34.131114   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1004 03:20:34.136170   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1004 03:20:34.149066   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1004 03:20:34.153717   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1004 03:20:34.167750   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1004 03:20:34.172288   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1004 03:20:34.184761   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1004 03:20:34.189707   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1004 03:20:34.201792   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1004 03:20:34.206305   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1004 03:20:34.218091   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:20:34.243235   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 03:20:34.267642   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:20:34.291741   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 03:20:34.317056   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1004 03:20:34.340832   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 03:20:34.364951   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:20:34.392565   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:20:34.419461   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 03:20:34.444597   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 03:20:34.470026   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:20:34.495443   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1004 03:20:34.513085   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1004 03:20:34.530602   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1004 03:20:34.548064   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1004 03:20:34.565179   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1004 03:20:34.582199   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1004 03:20:34.599469   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1004 03:20:34.617008   30630 ssh_runner.go:195] Run: openssl version
	I1004 03:20:34.623238   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 03:20:34.635851   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 03:20:34.641242   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:20:34.641300   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 03:20:34.647354   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:20:34.660625   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:20:34.673563   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:20:34.678872   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:20:34.678918   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:20:34.685228   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:20:34.696965   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 03:20:34.708173   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 03:20:34.712666   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:20:34.712728   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 03:20:34.718347   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 03:20:34.729423   30630 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:20:34.733599   30630 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 03:20:34.733645   30630 kubeadm.go:934] updating node {m03 192.168.39.53 8443 v1.31.1 crio true true} ...
	I1004 03:20:34.733734   30630 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-994751-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
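The kubelet drop-in above pins --hostname-override and --node-ip to the joining node. A minimal, assumed text/template rendering of such a unit (not minikube's real template) could look like this:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the log above for ha-994751-m03.
	t.Execute(os.Stdout, map[string]string{
		"Version":  "v1.31.1",
		"Hostname": "ha-994751-m03",
		"NodeIP":   "192.168.39.53",
	})
}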
	I1004 03:20:34.733759   30630 kube-vip.go:115] generating kube-vip config ...
	I1004 03:20:34.733788   30630 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1004 03:20:34.753104   30630 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1004 03:20:34.753160   30630 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
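The manifest above is the kube-vip static pod each control plane runs: it advertises the VIP 192.168.39.254 over ARP on eth0, load-balances the apiserver on port 8443, and uses leader election (5s lease, 3s renew deadline, 1s retry) to decide which node currently answers for the VIP. As a small sanity check, a sketch that parses such a manifest with sigs.k8s.io/yaml into a corev1.Pod and prints a couple of its settings; the file path is a placeholder.

package main

import (
	"fmt"
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Placeholder path; on a node the manifest is written to /etc/kubernetes/manifests/kube-vip.yaml.
	data, err := os.ReadFile("kube-vip.yaml")
	if err != nil {
		log.Fatal(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		log.Fatal(err)
	}
	// The manifest has exactly one container (kube-vip), so indexing [0] is safe here.
	for _, env := range pod.Spec.Containers[0].Env {
		if env.Name == "address" || env.Name == "vip_leaseduration" {
			fmt.Printf("%s=%s\n", env.Name, env.Value)
		}
	}
}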
	I1004 03:20:34.753207   30630 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:20:34.764605   30630 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1004 03:20:34.764653   30630 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1004 03:20:34.776026   30630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1004 03:20:34.776058   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1004 03:20:34.776073   30630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1004 03:20:34.776077   30630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1004 03:20:34.776094   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1004 03:20:34.776111   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1004 03:20:34.776123   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:20:34.776154   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1004 03:20:34.784508   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1004 03:20:34.784532   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1004 03:20:34.784546   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1004 03:20:34.784554   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1004 03:20:34.816412   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1004 03:20:34.816537   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1004 03:20:34.932259   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1004 03:20:34.932304   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
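The binaries are fetched from dl.k8s.io together with a .sha256 checksum file and staged under /var/lib/minikube/binaries/v1.31.1. A hedged sketch of downloading one release binary and verifying its published SHA-256 (not minikube's downloader):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL into memory; fine for a sketch, streaming would be better for large binaries.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		log.Fatal(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	h := sha256.Sum256(bin)
	got := hex.EncodeToString(h[:])
	want := strings.Fields(string(sum))[0]
	if got != want {
		log.Fatalf("checksum mismatch: got %s want %s", got, want)
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubectl verified:", got)
}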
	I1004 03:20:35.665849   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1004 03:20:35.676114   30630 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1004 03:20:35.694028   30630 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:20:35.718864   30630 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1004 03:20:35.736291   30630 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:20:35.740907   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:20:35.753115   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:20:35.870874   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:20:35.888175   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:20:35.888614   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:20:35.888675   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:20:35.903712   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33367
	I1004 03:20:35.904202   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:20:35.904676   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:20:35.904700   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:20:35.904994   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:20:35.905194   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:20:35.905357   30630 start.go:317] joinCluster: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:20:35.905474   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1004 03:20:35.905495   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:20:35.908275   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:20:35.908713   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:20:35.908739   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:20:35.908875   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:20:35.909047   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:20:35.909173   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:20:35.909303   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:20:36.083592   30630 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:20:36.083645   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e5abq7.epvk18yjfmjj0i7x --discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-994751-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443"
	I1004 03:20:57.688048   30630 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e5abq7.epvk18yjfmjj0i7x --discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-994751-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443": (21.604380186s)
	I1004 03:20:57.688081   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1004 03:20:58.272843   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-994751-m03 minikube.k8s.io/updated_at=2024_10_04T03_20_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-994751 minikube.k8s.io/primary=false
	I1004 03:20:58.405355   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-994751-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1004 03:20:58.529681   30630 start.go:319] duration metric: took 22.624319783s to joinCluster
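The join above is two steps: run "kubeadm token create --print-join-command --ttl=0" on an existing control plane, then run the printed command on the new node with --control-plane, --apiserver-advertise-address, --cri-socket and --node-name appended, followed by the label and taint removal shown above. A minimal sketch of that flow, assuming passwordless ssh aliases to both hosts; the flags mirror the log but the helper itself is illustrative only.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// run executes a command on a remote host over ssh and returns its trimmed combined output.
func run(host, cmd string) (string, error) {
	out, err := exec.Command("ssh", host, cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	primary, joiner := "ha-994751", "ha-994751-m03" // assumed ssh aliases

	joinCmd, err := run(primary,
		`sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0`)
	if err != nil {
		log.Fatalf("token create: %v\n%s", err, joinCmd)
	}

	// Extend the printed join command the same way the log does for a control-plane join.
	full := fmt.Sprintf(`sudo %s --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=%s --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443`,
		joinCmd, joiner)
	if out, err := run(joiner, full); err != nil {
		log.Fatalf("kubeadm join: %v\n%s", err, out)
	}
	fmt.Println("node joined; label and untaint it next, as in the log above")
}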
	I1004 03:20:58.529762   30630 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:20:58.530014   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:20:58.531345   30630 out.go:177] * Verifying Kubernetes components...
	I1004 03:20:58.532710   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:20:58.800802   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:20:58.844203   30630 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:20:58.844571   30630 kapi.go:59] client config for ha-994751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key", CAFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1004 03:20:58.844645   30630 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.65:8443
	I1004 03:20:58.844892   30630 node_ready.go:35] waiting up to 6m0s for node "ha-994751-m03" to be "Ready" ...
	I1004 03:20:58.844972   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:20:58.844982   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:58.844998   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:58.845007   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:58.848088   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:59.345094   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:20:59.345120   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:59.345130   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:59.345135   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:59.353141   30630 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:20:59.845733   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:20:59.845805   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:59.845823   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:59.845832   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:59.850171   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:00.345129   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:00.345150   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:00.345159   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:00.345163   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:00.348609   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:00.845173   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:00.845196   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:00.845205   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:00.845210   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:00.850207   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:00.851383   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:01.345051   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:01.345072   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:01.345079   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:01.345083   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:01.349207   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:01.845336   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:01.845357   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:01.845364   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:01.845369   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:01.848367   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:02.345495   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:02.345521   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:02.345529   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:02.345534   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:02.349838   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:02.845704   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:02.845732   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:02.845745   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:02.845752   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:02.849074   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:03.345450   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:03.345472   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:03.345480   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:03.345484   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:03.349082   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:03.349671   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:03.846035   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:03.846061   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:03.846072   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:03.846079   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:03.850455   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:04.345156   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:04.345183   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:04.345191   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:04.345196   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:04.349346   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:04.845676   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:04.845695   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:04.845702   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:04.845707   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:04.849977   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:05.345993   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:05.346019   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:05.346028   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:05.346032   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:05.350487   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:05.352077   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:05.845454   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:05.845473   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:05.845486   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:05.845493   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:05.848902   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:06.345394   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:06.345416   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:06.345424   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:06.345428   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:06.348963   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:06.846045   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:06.846066   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:06.846077   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:06.846084   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:06.849291   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:07.345224   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:07.345249   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:07.345258   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:07.345261   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:07.348950   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:07.845773   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:07.845797   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:07.845807   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:07.845812   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:07.853790   30630 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:21:07.854460   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:08.345396   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:08.345417   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:08.345425   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:08.345430   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:08.348967   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:08.845960   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:08.845987   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:08.845998   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:08.846004   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:08.849592   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:09.345163   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:09.345187   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:09.345195   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:09.345199   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:09.348412   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:09.845700   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:09.845720   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:09.845727   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:09.845732   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:09.848850   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:10.346002   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:10.346024   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:10.346036   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:10.346041   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:10.349778   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:10.350421   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:10.845273   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:10.845342   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:10.845357   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:10.845364   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:10.849249   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:11.345450   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:11.345474   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:11.345485   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:11.345490   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:11.348615   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:11.845521   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:11.845544   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:11.845552   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:11.845557   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:11.851020   30630 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 03:21:12.345427   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:12.345455   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:12.345466   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:12.345473   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:12.348894   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:12.845773   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:12.845807   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:12.845815   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:12.845821   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:12.849096   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:12.849859   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:13.345600   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:13.345625   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:13.345635   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:13.345641   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:13.348986   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:13.845088   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:13.845115   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:13.845122   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:13.845126   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:13.848813   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:14.345772   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:14.345796   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:14.345804   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:14.345809   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:14.349538   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:14.845967   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:14.845999   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:14.846010   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:14.846015   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:14.849646   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:14.850106   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:15.345479   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:15.345501   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:15.345509   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:15.345514   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:15.348633   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:15.845308   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:15.845329   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:15.845337   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:15.845342   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:15.848613   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:16.345615   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:16.345635   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.345697   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.345709   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.349189   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:16.845211   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:16.845234   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.845243   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.845247   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.848314   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:16.848965   30630 node_ready.go:49] node "ha-994751-m03" has status "Ready":"True"
	I1004 03:21:16.848983   30630 node_ready.go:38] duration metric: took 18.004075427s for node "ha-994751-m03" to be "Ready" ...
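The loop above polls GET /api/v1/nodes/ha-994751-m03 roughly every 500 ms until the node's Ready condition turns True (about 18 s here). An equivalent wait written against client-go, with the kubeconfig path as an assumption:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder kubeconfig path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll every 500ms for up to 6 minutes, matching the timeout used by node_ready.go above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-994751-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("node ha-994751-m03 is Ready")
}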
	I1004 03:21:16.848993   30630 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:21:16.849057   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:16.849066   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.849073   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.849077   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.855878   30630 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:21:16.863339   30630 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.863413   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-l6zst
	I1004 03:21:16.863421   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.863428   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.863432   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.866627   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:16.867225   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:16.867246   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.867254   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.867257   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.869745   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.870174   30630 pod_ready.go:93] pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:16.870189   30630 pod_ready.go:82] duration metric: took 6.828744ms for pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.870197   30630 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.870257   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zgdck
	I1004 03:21:16.870266   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.870272   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.870277   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.872665   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.873280   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:16.873293   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.873300   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.873304   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.875767   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.876277   30630 pod_ready.go:93] pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:16.876299   30630 pod_ready.go:82] duration metric: took 6.094854ms for pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.876312   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.876381   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751
	I1004 03:21:16.876394   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.876405   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.876415   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.878641   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.879297   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:16.879315   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.879323   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.879330   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.881505   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.881911   30630 pod_ready.go:93] pod "etcd-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:16.881925   30630 pod_ready.go:82] duration metric: took 5.606429ms for pod "etcd-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.881933   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.881973   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751-m02
	I1004 03:21:16.881980   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.881986   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.881991   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.884217   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.884882   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:16.884896   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.884903   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.884907   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.887109   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.887576   30630 pod_ready.go:93] pod "etcd-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:16.887592   30630 pod_ready.go:82] duration metric: took 5.65336ms for pod "etcd-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.887600   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.046004   30630 request.go:632] Waited for 158.354973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751-m03
	I1004 03:21:17.046081   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751-m03
	I1004 03:21:17.046092   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.046103   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.046113   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.049599   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:17.245822   30630 request.go:632] Waited for 195.387196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:17.245913   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:17.245920   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.245929   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.245937   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.249684   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:17.250373   30630 pod_ready.go:93] pod "etcd-ha-994751-m03" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:17.250391   30630 pod_ready.go:82] duration metric: took 362.785163ms for pod "etcd-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.250406   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.445530   30630 request.go:632] Waited for 195.055856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751
	I1004 03:21:17.445608   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751
	I1004 03:21:17.445614   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.445621   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.445627   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.449209   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:17.645177   30630 request.go:632] Waited for 195.266127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:17.645277   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:17.645290   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.645300   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.645307   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.648339   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:17.648978   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:17.648997   30630 pod_ready.go:82] duration metric: took 398.583614ms for pod "kube-apiserver-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.649010   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.845996   30630 request.go:632] Waited for 196.900731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m02
	I1004 03:21:17.846073   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m02
	I1004 03:21:17.846082   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.846092   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.846097   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.849729   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.045771   30630 request.go:632] Waited for 195.364695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:18.045824   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:18.045829   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.045837   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.045843   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.049741   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.050457   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:18.050479   30630 pod_ready.go:82] duration metric: took 401.458645ms for pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.050491   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.245708   30630 request.go:632] Waited for 195.123371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m03
	I1004 03:21:18.245779   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m03
	I1004 03:21:18.245788   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.245798   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.245805   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.248803   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:18.445802   30630 request.go:632] Waited for 196.359557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:18.445880   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:18.445891   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.445903   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.445912   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.449153   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.449859   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751-m03" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:18.449875   30630 pod_ready.go:82] duration metric: took 399.376745ms for pod "kube-apiserver-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.449884   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.646109   30630 request.go:632] Waited for 196.148252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751
	I1004 03:21:18.646174   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751
	I1004 03:21:18.646181   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.646190   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.646196   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.649910   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.845959   30630 request.go:632] Waited for 195.355273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:18.846052   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:18.846066   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.846077   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.846084   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.849452   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.849983   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:18.849999   30630 pod_ready.go:82] duration metric: took 400.109282ms for pod "kube-controller-manager-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.850007   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.045892   30630 request.go:632] Waited for 195.812536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m02
	I1004 03:21:19.045949   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m02
	I1004 03:21:19.045954   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.045962   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.045965   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.049481   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:19.245703   30630 request.go:632] Waited for 195.37604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:19.245795   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:19.245807   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.245816   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.245821   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.249221   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:19.249770   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:19.249786   30630 pod_ready.go:82] duration metric: took 399.773598ms for pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.249797   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.445959   30630 request.go:632] Waited for 196.084722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m03
	I1004 03:21:19.446017   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m03
	I1004 03:21:19.446023   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.446030   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.446034   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.449595   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:19.646055   30630 request.go:632] Waited for 195.452676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:19.646103   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:19.646110   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.646121   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.646126   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.649308   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:19.649980   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751-m03" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:19.650000   30630 pod_ready.go:82] duration metric: took 400.193489ms for pod "kube-controller-manager-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.650010   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9q6q2" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.846046   30630 request.go:632] Waited for 195.979747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9q6q2
	I1004 03:21:19.846103   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9q6q2
	I1004 03:21:19.846109   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.846116   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.846121   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.850032   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.045346   30630 request.go:632] Waited for 194.290233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:20.045412   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:20.045419   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.045429   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.045435   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.049187   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.049735   30630 pod_ready.go:93] pod "kube-proxy-9q6q2" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:20.049758   30630 pod_ready.go:82] duration metric: took 399.740576ms for pod "kube-proxy-9q6q2" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.049773   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f44b9" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.245829   30630 request.go:632] Waited for 195.994651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f44b9
	I1004 03:21:20.245916   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f44b9
	I1004 03:21:20.245926   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.245933   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.245938   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.248898   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:20.445831   30630 request.go:632] Waited for 196.355752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:20.445904   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:20.445910   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.445921   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.445925   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.449843   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.450548   30630 pod_ready.go:93] pod "kube-proxy-f44b9" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:20.450575   30630 pod_ready.go:82] duration metric: took 400.789271ms for pod "kube-proxy-f44b9" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.450587   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ph6cf" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.645991   30630 request.go:632] Waited for 195.320241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ph6cf
	I1004 03:21:20.646051   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ph6cf
	I1004 03:21:20.646061   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.646072   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.646084   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.649526   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.845351   30630 request.go:632] Waited for 195.084601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:20.845415   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:20.845423   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.845433   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.845439   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.849107   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.849683   30630 pod_ready.go:93] pod "kube-proxy-ph6cf" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:20.849702   30630 pod_ready.go:82] duration metric: took 399.106228ms for pod "kube-proxy-ph6cf" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.849714   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.046211   30630 request.go:632] Waited for 196.431281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751
	I1004 03:21:21.046274   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751
	I1004 03:21:21.046287   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.046297   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.046303   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.049644   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:21.245652   30630 request.go:632] Waited for 195.357611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:21.245701   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:21.245707   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.245717   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.245729   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.248937   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:21.249459   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:21.249477   30630 pod_ready.go:82] duration metric: took 399.754955ms for pod "kube-scheduler-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.249485   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.445624   30630 request.go:632] Waited for 196.058326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m02
	I1004 03:21:21.445695   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m02
	I1004 03:21:21.445700   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.445708   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.445713   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.449658   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:21.645861   30630 request.go:632] Waited for 195.383024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:21.645947   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:21.645959   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.646444   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.646457   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.649535   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:21.650129   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:21.650145   30630 pod_ready.go:82] duration metric: took 400.653773ms for pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.650155   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.846280   30630 request.go:632] Waited for 196.044885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m03
	I1004 03:21:21.846336   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m03
	I1004 03:21:21.846342   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.846349   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.846354   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.849713   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:22.045755   30630 request.go:632] Waited for 195.414064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:22.045827   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:22.045834   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.045841   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.045847   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.049538   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:22.050359   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751-m03" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:22.050378   30630 pod_ready.go:82] duration metric: took 400.213357ms for pod "kube-scheduler-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:22.050389   30630 pod_ready.go:39] duration metric: took 5.201387664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:21:22.050412   30630 api_server.go:52] waiting for apiserver process to appear ...
	I1004 03:21:22.050477   30630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:21:22.066998   30630 api_server.go:72] duration metric: took 23.53720299s to wait for apiserver process to appear ...
	I1004 03:21:22.067023   30630 api_server.go:88] waiting for apiserver healthz status ...
	I1004 03:21:22.067042   30630 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I1004 03:21:22.074791   30630 api_server.go:279] https://192.168.39.65:8443/healthz returned 200:
	ok
	I1004 03:21:22.074864   30630 round_trippers.go:463] GET https://192.168.39.65:8443/version
	I1004 03:21:22.074872   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.074885   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.074896   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.075865   30630 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1004 03:21:22.075921   30630 api_server.go:141] control plane version: v1.31.1
	I1004 03:21:22.075934   30630 api_server.go:131] duration metric: took 8.905409ms to wait for apiserver health ...
	I1004 03:21:22.075941   30630 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 03:21:22.245389   30630 request.go:632] Waited for 169.386949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:22.245481   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:22.245490   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.245505   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.245516   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.251617   30630 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:21:22.258944   30630 system_pods.go:59] 24 kube-system pods found
	I1004 03:21:22.258969   30630 system_pods.go:61] "coredns-7c65d6cfc9-l6zst" [554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab] Running
	I1004 03:21:22.258974   30630 system_pods.go:61] "coredns-7c65d6cfc9-zgdck" [dcd6ed49-8491-4eb0-9863-b498c76ec3c5] Running
	I1004 03:21:22.258980   30630 system_pods.go:61] "etcd-ha-994751" [ad26bfe8-b3b3-44fb-8f83-fd4f62a92ea6] Running
	I1004 03:21:22.258984   30630 system_pods.go:61] "etcd-ha-994751-m02" [540bc27b-d1ee-48e7-99a3-5daf036ec06f] Running
	I1004 03:21:22.258987   30630 system_pods.go:61] "etcd-ha-994751-m03" [610c4e0c-9af8-441e-9524-ccd6fe6fe390] Running
	I1004 03:21:22.258990   30630 system_pods.go:61] "kindnet-2mhh2" [442d5ad9-dc9c-4a07-90b3-549591f9d2f1] Running
	I1004 03:21:22.258992   30630 system_pods.go:61] "kindnet-clt5p" [a904ebc8-f149-4b9f-9637-a37cb56af836] Running
	I1004 03:21:22.258994   30630 system_pods.go:61] "kindnet-rmcvt" [08ef4494-9229-4cd3-b22e-10709b88e14c] Running
	I1004 03:21:22.258997   30630 system_pods.go:61] "kube-apiserver-ha-994751" [68f6b078-f7ae-4c2d-b424-372647c7d203] Running
	I1004 03:21:22.259012   30630 system_pods.go:61] "kube-apiserver-ha-994751-m02" [f4773005-30fa-46bc-b372-bbd0c7bfc1f1] Running
	I1004 03:21:22.259017   30630 system_pods.go:61] "kube-apiserver-ha-994751-m03" [42150ae1-b298-4974-976f-05e9a2a32154] Running
	I1004 03:21:22.259020   30630 system_pods.go:61] "kube-controller-manager-ha-994751" [7a09373f-d6d5-4fa2-bc68-0f2ba151761f] Running
	I1004 03:21:22.259023   30630 system_pods.go:61] "kube-controller-manager-ha-994751-m02" [31278e89-fb33-4713-b0e4-3b589944caa9] Running
	I1004 03:21:22.259027   30630 system_pods.go:61] "kube-controller-manager-ha-994751-m03" [5897468d-7872-4fed-81bc-bf9b37e42ef4] Running
	I1004 03:21:22.259030   30630 system_pods.go:61] "kube-proxy-9q6q2" [a3b96ca0-fe8c-4492-a05c-5f8ff9cb8d3f] Running
	I1004 03:21:22.259033   30630 system_pods.go:61] "kube-proxy-f44b9" [e3e1a917-0150-4608-b5f3-b590d330d2ce] Running
	I1004 03:21:22.259036   30630 system_pods.go:61] "kube-proxy-ph6cf" [0eef5964-876b-4cee-ad4c-c93ab034d3f9] Running
	I1004 03:21:22.259039   30630 system_pods.go:61] "kube-scheduler-ha-994751" [527e5bff-7234-4ab0-9952-fe9ec87ab01a] Running
	I1004 03:21:22.259042   30630 system_pods.go:61] "kube-scheduler-ha-994751-m02" [9c89432d-e607-4e86-8d68-12ee0c2f3170] Running
	I1004 03:21:22.259046   30630 system_pods.go:61] "kube-scheduler-ha-994751-m03" [f53fda60-a075-4f78-a64b-52e960a4b28b] Running
	I1004 03:21:22.259048   30630 system_pods.go:61] "kube-vip-ha-994751" [7955e17e-6c22-49b3-aa6a-9d37b9bc7942] Running
	I1004 03:21:22.259051   30630 system_pods.go:61] "kube-vip-ha-994751-m02" [944f1844-b8c2-410e-981d-3705beb22638] Running
	I1004 03:21:22.259054   30630 system_pods.go:61] "kube-vip-ha-994751-m03" [9ec22347-f3d6-419e-867a-0de177976203] Running
	I1004 03:21:22.259056   30630 system_pods.go:61] "storage-provisioner" [cc60903f-91b9-4e59-92ab-9f16c09d38d2] Running
	I1004 03:21:22.259062   30630 system_pods.go:74] duration metric: took 183.116626ms to wait for pod list to return data ...
	I1004 03:21:22.259072   30630 default_sa.go:34] waiting for default service account to be created ...
	I1004 03:21:22.445504   30630 request.go:632] Waited for 186.355323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:21:22.445557   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:21:22.445563   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.445570   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.445575   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.449437   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:22.449567   30630 default_sa.go:45] found service account: "default"
	I1004 03:21:22.449589   30630 default_sa.go:55] duration metric: took 190.510625ms for default service account to be created ...
	I1004 03:21:22.449599   30630 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 03:21:22.646023   30630 request.go:632] Waited for 196.345892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:22.646077   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:22.646096   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.646106   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.646115   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.652169   30630 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:21:22.660351   30630 system_pods.go:86] 24 kube-system pods found
	I1004 03:21:22.660376   30630 system_pods.go:89] "coredns-7c65d6cfc9-l6zst" [554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab] Running
	I1004 03:21:22.660386   30630 system_pods.go:89] "coredns-7c65d6cfc9-zgdck" [dcd6ed49-8491-4eb0-9863-b498c76ec3c5] Running
	I1004 03:21:22.660391   30630 system_pods.go:89] "etcd-ha-994751" [ad26bfe8-b3b3-44fb-8f83-fd4f62a92ea6] Running
	I1004 03:21:22.660395   30630 system_pods.go:89] "etcd-ha-994751-m02" [540bc27b-d1ee-48e7-99a3-5daf036ec06f] Running
	I1004 03:21:22.660398   30630 system_pods.go:89] "etcd-ha-994751-m03" [610c4e0c-9af8-441e-9524-ccd6fe6fe390] Running
	I1004 03:21:22.660402   30630 system_pods.go:89] "kindnet-2mhh2" [442d5ad9-dc9c-4a07-90b3-549591f9d2f1] Running
	I1004 03:21:22.660405   30630 system_pods.go:89] "kindnet-clt5p" [a904ebc8-f149-4b9f-9637-a37cb56af836] Running
	I1004 03:21:22.660408   30630 system_pods.go:89] "kindnet-rmcvt" [08ef4494-9229-4cd3-b22e-10709b88e14c] Running
	I1004 03:21:22.660412   30630 system_pods.go:89] "kube-apiserver-ha-994751" [68f6b078-f7ae-4c2d-b424-372647c7d203] Running
	I1004 03:21:22.660416   30630 system_pods.go:89] "kube-apiserver-ha-994751-m02" [f4773005-30fa-46bc-b372-bbd0c7bfc1f1] Running
	I1004 03:21:22.660419   30630 system_pods.go:89] "kube-apiserver-ha-994751-m03" [42150ae1-b298-4974-976f-05e9a2a32154] Running
	I1004 03:21:22.660423   30630 system_pods.go:89] "kube-controller-manager-ha-994751" [7a09373f-d6d5-4fa2-bc68-0f2ba151761f] Running
	I1004 03:21:22.660426   30630 system_pods.go:89] "kube-controller-manager-ha-994751-m02" [31278e89-fb33-4713-b0e4-3b589944caa9] Running
	I1004 03:21:22.660432   30630 system_pods.go:89] "kube-controller-manager-ha-994751-m03" [5897468d-7872-4fed-81bc-bf9b37e42ef4] Running
	I1004 03:21:22.660437   30630 system_pods.go:89] "kube-proxy-9q6q2" [a3b96ca0-fe8c-4492-a05c-5f8ff9cb8d3f] Running
	I1004 03:21:22.660440   30630 system_pods.go:89] "kube-proxy-f44b9" [e3e1a917-0150-4608-b5f3-b590d330d2ce] Running
	I1004 03:21:22.660443   30630 system_pods.go:89] "kube-proxy-ph6cf" [0eef5964-876b-4cee-ad4c-c93ab034d3f9] Running
	I1004 03:21:22.660450   30630 system_pods.go:89] "kube-scheduler-ha-994751" [527e5bff-7234-4ab0-9952-fe9ec87ab01a] Running
	I1004 03:21:22.660453   30630 system_pods.go:89] "kube-scheduler-ha-994751-m02" [9c89432d-e607-4e86-8d68-12ee0c2f3170] Running
	I1004 03:21:22.660456   30630 system_pods.go:89] "kube-scheduler-ha-994751-m03" [f53fda60-a075-4f78-a64b-52e960a4b28b] Running
	I1004 03:21:22.660465   30630 system_pods.go:89] "kube-vip-ha-994751" [7955e17e-6c22-49b3-aa6a-9d37b9bc7942] Running
	I1004 03:21:22.660470   30630 system_pods.go:89] "kube-vip-ha-994751-m02" [944f1844-b8c2-410e-981d-3705beb22638] Running
	I1004 03:21:22.660473   30630 system_pods.go:89] "kube-vip-ha-994751-m03" [9ec22347-f3d6-419e-867a-0de177976203] Running
	I1004 03:21:22.660476   30630 system_pods.go:89] "storage-provisioner" [cc60903f-91b9-4e59-92ab-9f16c09d38d2] Running
	I1004 03:21:22.660481   30630 system_pods.go:126] duration metric: took 210.876444ms to wait for k8s-apps to be running ...
	I1004 03:21:22.660493   30630 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 03:21:22.660540   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:21:22.675933   30630 system_svc.go:56] duration metric: took 15.434198ms WaitForService to wait for kubelet
	I1004 03:21:22.675957   30630 kubeadm.go:582] duration metric: took 24.146164676s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:21:22.675972   30630 node_conditions.go:102] verifying NodePressure condition ...
	I1004 03:21:22.845860   30630 request.go:632] Waited for 169.820621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes
	I1004 03:21:22.845932   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes
	I1004 03:21:22.845941   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.845948   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.845959   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.850058   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:22.851493   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:21:22.851511   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:21:22.851521   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:21:22.851525   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:21:22.851529   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:21:22.851534   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:21:22.851538   30630 node_conditions.go:105] duration metric: took 175.561582ms to run NodePressure ...
	I1004 03:21:22.851551   30630 start.go:241] waiting for startup goroutines ...
	I1004 03:21:22.851569   30630 start.go:255] writing updated cluster config ...
	I1004 03:21:22.851861   30630 ssh_runner.go:195] Run: rm -f paused
	I1004 03:21:22.904494   30630 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 03:21:22.906685   30630 out.go:177] * Done! kubectl is now configured to use "ha-994751" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.034784649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012311034759739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19f1b59c-105a-4bc1-9b1b-5bde54bd03ce name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.035366573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e94f1d02-cc90-47d1-a68c-8f7d7fd32c5a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.035449923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e94f1d02-cc90-47d1-a68c-8f7d7fd32c5a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.035687877Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dd8849f48bb11b35bb1f5bd585512cb0b89dc44b163449a6b50f15f54de02a5,PodSandboxId:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728012087721104622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586,PodSandboxId:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946866667557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe1e8ec5dfe43f5131ba996561aaabff07b86db463b8b1773b21b5e5c2d1261,PodSandboxId:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728011946910404717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd,PodSandboxId:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946858124672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5b
d8-44d6-a7dd-6eb87ef3b9ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99,PodSandboxId:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17280119
34650609445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160,PodSandboxId:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728011934180469533,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f,PodSandboxId:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728011924453506492,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec,PodSandboxId:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728011921323682309,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec,PodSandboxId:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728011921323115948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe,PodSandboxId:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728011921253183954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8,PodSandboxId:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728011921250593100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e94f1d02-cc90-47d1-a68c-8f7d7fd32c5a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.081376629Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f58bf44b-761e-4217-ba33-4690948a5d6c name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.081469875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f58bf44b-761e-4217-ba33-4690948a5d6c name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.083160487Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9101560-2fec-45f5-8ee1-69cf81d292b4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.083604922Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012311083581731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9101560-2fec-45f5-8ee1-69cf81d292b4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.084186884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=453037b7-a326-466b-a462-3541952de1d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.084293494Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=453037b7-a326-466b-a462-3541952de1d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.084880641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dd8849f48bb11b35bb1f5bd585512cb0b89dc44b163449a6b50f15f54de02a5,PodSandboxId:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728012087721104622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586,PodSandboxId:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946866667557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe1e8ec5dfe43f5131ba996561aaabff07b86db463b8b1773b21b5e5c2d1261,PodSandboxId:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728011946910404717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd,PodSandboxId:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946858124672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5b
d8-44d6-a7dd-6eb87ef3b9ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99,PodSandboxId:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17280119
34650609445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160,PodSandboxId:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728011934180469533,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f,PodSandboxId:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728011924453506492,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec,PodSandboxId:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728011921323682309,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec,PodSandboxId:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728011921323115948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe,PodSandboxId:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728011921253183954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8,PodSandboxId:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728011921250593100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=453037b7-a326-466b-a462-3541952de1d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.126339354Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=827733c2-9163-4a81-af06-d83c91b21a55 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.126418826Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=827733c2-9163-4a81-af06-d83c91b21a55 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.128331452Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5cd80226-fbaf-44bc-90de-625aca6d87d0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.128732287Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012311128710486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5cd80226-fbaf-44bc-90de-625aca6d87d0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.129736231Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da4b4a89-39be-477f-8d59-5dbd7466bac4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.129826403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da4b4a89-39be-477f-8d59-5dbd7466bac4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.130734035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dd8849f48bb11b35bb1f5bd585512cb0b89dc44b163449a6b50f15f54de02a5,PodSandboxId:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728012087721104622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586,PodSandboxId:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946866667557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe1e8ec5dfe43f5131ba996561aaabff07b86db463b8b1773b21b5e5c2d1261,PodSandboxId:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728011946910404717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd,PodSandboxId:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946858124672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5b
d8-44d6-a7dd-6eb87ef3b9ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99,PodSandboxId:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17280119
34650609445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160,PodSandboxId:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728011934180469533,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f,PodSandboxId:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728011924453506492,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec,PodSandboxId:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728011921323682309,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec,PodSandboxId:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728011921323115948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe,PodSandboxId:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728011921253183954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8,PodSandboxId:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728011921250593100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da4b4a89-39be-477f-8d59-5dbd7466bac4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.175873881Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9740d793-38d2-4829-86be-2a8f554bc857 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.176008631Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9740d793-38d2-4829-86be-2a8f554bc857 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.177402341Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6bc5d33a-ccd7-4c21-95a1-da0c724abba1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.177854333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012311177826818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bc5d33a-ccd7-4c21-95a1-da0c724abba1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.178841962Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2e2736e-9368-4dd0-ba49-3749d45392c4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.178972201Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2e2736e-9368-4dd0-ba49-3749d45392c4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:11 ha-994751 crio[664]: time="2024-10-04 03:25:11.179291104Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dd8849f48bb11b35bb1f5bd585512cb0b89dc44b163449a6b50f15f54de02a5,PodSandboxId:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728012087721104622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586,PodSandboxId:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946866667557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe1e8ec5dfe43f5131ba996561aaabff07b86db463b8b1773b21b5e5c2d1261,PodSandboxId:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728011946910404717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd,PodSandboxId:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946858124672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5b
d8-44d6-a7dd-6eb87ef3b9ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99,PodSandboxId:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17280119
34650609445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160,PodSandboxId:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728011934180469533,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f,PodSandboxId:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728011924453506492,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec,PodSandboxId:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728011921323682309,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec,PodSandboxId:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728011921323115948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe,PodSandboxId:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728011921253183954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8,PodSandboxId:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728011921250593100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2e2736e-9368-4dd0-ba49-3749d45392c4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9dd8849f48bb1       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   21e8386b77b62       busybox-7dff88458-vh5j6
	2fe1e8ec5dfe4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   dab235bc541ca       storage-provisioner
	eb082a979b36c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   be9b34d6ca0bf       coredns-7c65d6cfc9-zgdck
	93aa8fd39f9c0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   d9a5ca3b325fa       coredns-7c65d6cfc9-l6zst
	6a3f40105608f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   454652c11f4fe       kindnet-2mhh2
	731622c5caa6f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   44f2b282edd57       kube-proxy-f44b9
	8830f0c28d759       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   5461b35eef9c3       kube-vip-ha-994751
	e49d081b73667       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   0372e9d489f05       kube-scheduler-ha-994751
	f5568cb7839e2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   c61920ab308f6       etcd-ha-994751
	849282c506754       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   6d7ea048eea90       kube-apiserver-ha-994751
	f041d718c872f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   8c1c0f1b1a430       kube-controller-manager-ha-994751
	
	
	==> coredns [93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd] <==
	[INFO] 10.244.2.2:42178 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.010745169s
	[INFO] 10.244.2.2:34829 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.009009564s
	[INFO] 10.244.0.4:43910 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001485572s
	[INFO] 10.244.1.2:45378 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000181404s
	[INFO] 10.244.1.2:40886 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001942971s
	[INFO] 10.244.2.2:45461 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000217787s
	[INFO] 10.244.2.2:56545 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000167289s
	[INFO] 10.244.2.2:52063 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000246892s
	[INFO] 10.244.0.4:48765 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150103s
	[INFO] 10.244.1.2:53871 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168625s
	[INFO] 10.244.1.2:58325 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001736755s
	[INFO] 10.244.1.2:38700 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085818s
	[INFO] 10.244.2.2:53525 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016163s
	[INFO] 10.244.2.2:55339 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126355s
	[INFO] 10.244.0.4:33506 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176834s
	[INFO] 10.244.0.4:47714 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136674s
	[INFO] 10.244.0.4:49593 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139876s
	[INFO] 10.244.1.2:51243 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137889s
	[INFO] 10.244.2.2:56043 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000221873s
	[INFO] 10.244.2.2:35783 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000138959s
	[INFO] 10.244.0.4:37503 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013937s
	[INFO] 10.244.0.4:46310 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132408s
	[INFO] 10.244.0.4:35014 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074557s
	[INFO] 10.244.1.2:51803 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153481s
	[INFO] 10.244.1.2:47758 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000198394s
	
	
	==> coredns [eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586] <==
	[INFO] 10.244.2.2:43924 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.01283325s
	[INFO] 10.244.2.2:35798 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148903s
	[INFO] 10.244.0.4:59562 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140549s
	[INFO] 10.244.0.4:41362 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002209213s
	[INFO] 10.244.0.4:41786 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133758s
	[INFO] 10.244.0.4:49269 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001539557s
	[INFO] 10.244.0.4:56941 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018736s
	[INFO] 10.244.0.4:47984 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173422s
	[INFO] 10.244.0.4:41970 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061431s
	[INFO] 10.244.1.2:32918 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119893s
	[INFO] 10.244.1.2:39792 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093113s
	[INFO] 10.244.1.2:41331 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001259323s
	[INFO] 10.244.1.2:45464 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106483s
	[INFO] 10.244.1.2:35852 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153198s
	[INFO] 10.244.2.2:38240 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114031s
	[INFO] 10.244.2.2:54004 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008059s
	[INFO] 10.244.0.4:39542 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092418s
	[INFO] 10.244.1.2:41262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166812s
	[INFO] 10.244.1.2:55889 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146278s
	[INFO] 10.244.1.2:35654 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131643s
	[INFO] 10.244.2.2:37029 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012813s
	[INFO] 10.244.2.2:33774 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000223324s
	[INFO] 10.244.0.4:38911 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138291s
	[INFO] 10.244.1.2:56619 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093621s
	[INFO] 10.244.1.2:33622 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154511s
	
	
	==> describe nodes <==
	Name:               ha-994751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-994751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-994751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T03_18_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:18:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-994751
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:21:51 +0000   Fri, 04 Oct 2024 03:18:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:21:51 +0000   Fri, 04 Oct 2024 03:18:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:21:51 +0000   Fri, 04 Oct 2024 03:18:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:21:51 +0000   Fri, 04 Oct 2024 03:19:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    ha-994751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7452b105a68246eeb61757acefd7f693
	  System UUID:                7452b105-a682-46ee-b617-57acefd7f693
	  Boot ID:                    aecf415c-e5c2-46a9-81d5-d95311218d51
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vh5j6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 coredns-7c65d6cfc9-l6zst             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m19s
	  kube-system                 coredns-7c65d6cfc9-zgdck             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m19s
	  kube-system                 etcd-ha-994751                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m24s
	  kube-system                 kindnet-2mhh2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m19s
	  kube-system                 kube-apiserver-ha-994751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-controller-manager-ha-994751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-proxy-f44b9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-scheduler-ha-994751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-vip-ha-994751                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m16s  kube-proxy       
	  Normal  Starting                 6m24s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m24s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m24s  kubelet          Node ha-994751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m24s  kubelet          Node ha-994751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m24s  kubelet          Node ha-994751 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m20s  node-controller  Node ha-994751 event: Registered Node ha-994751 in Controller
	  Normal  NodeReady                6m5s   kubelet          Node ha-994751 status is now: NodeReady
	  Normal  RegisteredNode           5m24s  node-controller  Node ha-994751 event: Registered Node ha-994751 in Controller
	  Normal  RegisteredNode           4m8s   node-controller  Node ha-994751 event: Registered Node ha-994751 in Controller
	
	
	Name:               ha-994751-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-994751-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-994751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_04T03_19_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:19:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-994751-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:22:42 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 04 Oct 2024 03:21:42 +0000   Fri, 04 Oct 2024 03:23:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 04 Oct 2024 03:21:42 +0000   Fri, 04 Oct 2024 03:23:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 04 Oct 2024 03:21:42 +0000   Fri, 04 Oct 2024 03:23:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 04 Oct 2024 03:21:42 +0000   Fri, 04 Oct 2024 03:23:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    ha-994751-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f6683e6a9e1244f787f84f2a5c1bf490
	  System UUID:                f6683e6a-9e12-44f7-87f8-4f2a5c1bf490
	  Boot ID:                    8b02ddc0-820d-4de5-b649-7e2202f66ea5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wc5kg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 etcd-ha-994751-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m30s
	  kube-system                 kindnet-rmcvt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m32s
	  kube-system                 kube-apiserver-ha-994751-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-controller-manager-ha-994751-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-ph6cf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-scheduler-ha-994751-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-vip-ha-994751-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m27s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m32s (x8 over 5m32s)  kubelet          Node ha-994751-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m32s (x8 over 5m32s)  kubelet          Node ha-994751-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m32s (x7 over 5m32s)  kubelet          Node ha-994751-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m30s                  node-controller  Node ha-994751-m02 event: Registered Node ha-994751-m02 in Controller
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-994751-m02 event: Registered Node ha-994751-m02 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-994751-m02 event: Registered Node ha-994751-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-994751-m02 status is now: NodeNotReady
	
	
	Name:               ha-994751-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-994751-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-994751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_04T03_20_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:20:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-994751-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:20:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:20:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:20:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:21:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-994751-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 df18b27d8a2e4c8893a601b97ec7e8e0
	  System UUID:                df18b27d-8a2e-4c88-93a6-01b97ec7e8e0
	  Boot ID:                    138aa962-c7a2-47ea-82c1-2a5ccfbc3de0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nrdqk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 etcd-ha-994751-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m15s
	  kube-system                 kindnet-clt5p                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m17s
	  kube-system                 kube-apiserver-ha-994751-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-ha-994751-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-proxy-9q6q2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-scheduler-ha-994751-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-vip-ha-994751-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m17s (x8 over 4m17s)  kubelet          Node ha-994751-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s (x8 over 4m17s)  kubelet          Node ha-994751-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s (x7 over 4m17s)  kubelet          Node ha-994751-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-994751-m03 event: Registered Node ha-994751-m03 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-994751-m03 event: Registered Node ha-994751-m03 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-994751-m03 event: Registered Node ha-994751-m03 in Controller
	
	
	Name:               ha-994751-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-994751-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-994751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_04T03_22_03_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:22:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-994751-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:22:33 +0000   Fri, 04 Oct 2024 03:22:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:22:33 +0000   Fri, 04 Oct 2024 03:22:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:22:33 +0000   Fri, 04 Oct 2024 03:22:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:22:33 +0000   Fri, 04 Oct 2024 03:22:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    ha-994751-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d61802e745d4414c8e0a1c3e5c1319f7
	  System UUID:                d61802e7-45d4-414c-8e0a-1c3e5c1319f7
	  Boot ID:                    f154d01f-d315-40b5-84e6-0d0b669735cf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-sggz9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m8s
	  kube-system                 kube-proxy-xsz4w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m2s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  3m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m8s                 node-controller  Node ha-994751-m04 event: Registered Node ha-994751-m04 in Controller
	  Normal  RegisteredNode           3m8s                 node-controller  Node ha-994751-m04 event: Registered Node ha-994751-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m8s (x2 over 3m9s)  kubelet          Node ha-994751-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m8s (x2 over 3m9s)  kubelet          Node ha-994751-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m8s (x2 over 3m9s)  kubelet          Node ha-994751-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-994751-m04 event: Registered Node ha-994751-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-994751-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 4 03:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050646] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040240] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.800548] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.470270] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.581508] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.982603] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.059297] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061306] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.198058] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.129574] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.276832] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.888308] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +3.806908] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.054958] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.117103] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.085956] kauditd_printk_skb: 79 callbacks suppressed
	[  +7.063470] kauditd_printk_skb: 21 callbacks suppressed
	[Oct 4 03:19] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.285701] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec] <==
	{"level":"warn","ts":"2024-10-04T03:25:11.374288Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.384172Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.440063Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.449036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.452748Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.465047Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.473865Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.474134Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.481155Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.485338Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.489110Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.498002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.504666Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.512180Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.523895Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.530205Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.537055Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.544091Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.551158Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.555351Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.559536Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.563695Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.570819Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.574063Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:11.577515Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 03:25:11 up 7 min,  0 users,  load average: 0.13, 0.16, 0.09
	Linux ha-994751 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99] <==
	I1004 03:24:35.996569       1 main.go:322] Node ha-994751-m04 has CIDR [10.244.3.0/24] 
	I1004 03:24:45.999760       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1004 03:24:45.999899       1 main.go:299] handling current node
	I1004 03:24:46.000028       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I1004 03:24:46.000107       1 main.go:322] Node ha-994751-m02 has CIDR [10.244.1.0/24] 
	I1004 03:24:46.000367       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1004 03:24:46.000422       1 main.go:322] Node ha-994751-m03 has CIDR [10.244.2.0/24] 
	I1004 03:24:46.000525       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I1004 03:24:46.000568       1 main.go:322] Node ha-994751-m04 has CIDR [10.244.3.0/24] 
	I1004 03:24:55.996427       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1004 03:24:55.996581       1 main.go:299] handling current node
	I1004 03:24:55.996609       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I1004 03:24:55.996628       1 main.go:322] Node ha-994751-m02 has CIDR [10.244.1.0/24] 
	I1004 03:24:55.996891       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1004 03:24:55.997045       1 main.go:322] Node ha-994751-m03 has CIDR [10.244.2.0/24] 
	I1004 03:24:55.997190       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I1004 03:24:55.997280       1 main.go:322] Node ha-994751-m04 has CIDR [10.244.3.0/24] 
	I1004 03:25:05.999244       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I1004 03:25:05.999341       1 main.go:322] Node ha-994751-m02 has CIDR [10.244.1.0/24] 
	I1004 03:25:05.999525       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1004 03:25:05.999565       1 main.go:322] Node ha-994751-m03 has CIDR [10.244.2.0/24] 
	I1004 03:25:05.999630       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I1004 03:25:05.999660       1 main.go:322] Node ha-994751-m04 has CIDR [10.244.3.0/24] 
	I1004 03:25:05.999742       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1004 03:25:05.999771       1 main.go:299] handling current node
	
	
	==> kube-apiserver [849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe] <==
	I1004 03:18:46.533293       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 03:18:46.536324       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1004 03:18:46.567509       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.65]
	I1004 03:18:46.569728       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:18:46.579199       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:18:47.324394       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 03:18:47.342483       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1004 03:18:47.354293       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 03:18:52.030260       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1004 03:18:52.131882       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1004 03:21:29.605335       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53690: use of closed network connection
	E1004 03:21:29.795618       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53702: use of closed network connection
	E1004 03:21:29.974284       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53722: use of closed network connection
	E1004 03:21:30.184885       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53734: use of closed network connection
	E1004 03:21:30.399362       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53748: use of closed network connection
	E1004 03:21:30.586499       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53770: use of closed network connection
	E1004 03:21:30.773657       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53776: use of closed network connection
	E1004 03:21:30.946921       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53796: use of closed network connection
	E1004 03:21:31.140751       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53812: use of closed network connection
	E1004 03:21:31.439406       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53848: use of closed network connection
	E1004 03:21:31.610289       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53874: use of closed network connection
	E1004 03:21:31.791527       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53896: use of closed network connection
	E1004 03:21:31.973829       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53924: use of closed network connection
	E1004 03:21:32.157183       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53938: use of closed network connection
	E1004 03:21:32.326553       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53952: use of closed network connection
	
	
	==> kube-controller-manager [f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8] <==
	I1004 03:22:03.059069       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-994751-m04" podCIDRs=["10.244.3.0/24"]
	I1004 03:22:03.059118       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.061876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.076574       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.137039       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.276697       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.662795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.977537       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:04.044472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:06.344839       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:06.345923       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-994751-m04"
	I1004 03:22:06.383881       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:13.412719       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:24.487665       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-994751-m04"
	I1004 03:22:24.487754       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:24.502742       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:26.362397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:33.863379       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:23:24.007837       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-994751-m04"
	I1004 03:23:24.008551       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m02"
	I1004 03:23:24.038687       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m02"
	I1004 03:23:24.187288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.759379ms"
	I1004 03:23:24.187415       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.69µs"
	I1004 03:23:26.454826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m02"
	I1004 03:23:29.201808       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m02"
	
	
	==> kube-proxy [731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:18:54.520708       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:18:54.543515       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.65"]
	E1004 03:18:54.543642       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:18:54.585531       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:18:54.585592       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:18:54.585623       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:18:54.595069       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:18:54.598246       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:18:54.598343       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:18:54.602801       1 config.go:199] "Starting service config controller"
	I1004 03:18:54.603172       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:18:54.603521       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:18:54.603587       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:18:54.607605       1 config.go:328] "Starting node config controller"
	I1004 03:18:54.607621       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:18:54.704654       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:18:54.704732       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:18:54.707708       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec] <==
	W1004 03:18:45.760588       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:18:45.760709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:18:45.902575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:18:45.902704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:18:45.937221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 03:18:45.937512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:18:46.030883       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 03:18:46.031049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1004 03:18:48.095287       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1004 03:22:03.109132       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zh45q\": pod kindnet-zh45q is already assigned to node \"ha-994751-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zh45q" node="ha-994751-m04"
	E1004 03:22:03.113875       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cc0c3789-7dca-4ede-a355-9ac6d9db68c2(kube-system/kindnet-zh45q) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-zh45q"
	E1004 03:22:03.114052       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zh45q\": pod kindnet-zh45q is already assigned to node \"ha-994751-m04\"" pod="kube-system/kindnet-zh45q"
	I1004 03:22:03.114143       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zh45q" node="ha-994751-m04"
	E1004 03:22:03.121368       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xsz4w\": pod kube-proxy-xsz4w is already assigned to node \"ha-994751-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xsz4w" node="ha-994751-m04"
	E1004 03:22:03.121569       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4f6e672a-e80b-4f45-b3a5-98dfa1ebaad3(kube-system/kube-proxy-xsz4w) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-xsz4w"
	E1004 03:22:03.121624       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xsz4w\": pod kube-proxy-xsz4w is already assigned to node \"ha-994751-m04\"" pod="kube-system/kube-proxy-xsz4w"
	I1004 03:22:03.121686       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xsz4w" node="ha-994751-m04"
	E1004 03:22:03.177157       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-zbb9z\": pod kube-proxy-zbb9z is already assigned to node \"ha-994751-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-zbb9z" node="ha-994751-m04"
	E1004 03:22:03.177330       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a7948b15-0522-4cbd-8803-8c139b2e791a(kube-system/kube-proxy-zbb9z) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-zbb9z"
	E1004 03:22:03.177379       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zbb9z\": pod kube-proxy-zbb9z is already assigned to node \"ha-994751-m04\"" pod="kube-system/kube-proxy-zbb9z"
	I1004 03:22:03.177445       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-zbb9z" node="ha-994751-m04"
	E1004 03:22:03.177921       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qfb5r\": pod kindnet-qfb5r is already assigned to node \"ha-994751-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qfb5r" node="ha-994751-m04"
	E1004 03:22:03.181030       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 085d0454-1ccc-408e-ae12-366c29ab0a15(kube-system/kindnet-qfb5r) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qfb5r"
	E1004 03:22:03.181113       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qfb5r\": pod kindnet-qfb5r is already assigned to node \"ha-994751-m04\"" pod="kube-system/kindnet-qfb5r"
	I1004 03:22:03.181162       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qfb5r" node="ha-994751-m04"
	
	
	==> kubelet <==
	Oct 04 03:23:47 ha-994751 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:23:47 ha-994751 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:23:47 ha-994751 kubelet[1305]: E1004 03:23:47.373529    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012227373073617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:23:47 ha-994751 kubelet[1305]: E1004 03:23:47.373558    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012227373073617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:23:57 ha-994751 kubelet[1305]: E1004 03:23:57.376221    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012237375404117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:23:57 ha-994751 kubelet[1305]: E1004 03:23:57.376607    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012237375404117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:07 ha-994751 kubelet[1305]: E1004 03:24:07.379453    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012247378682972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:07 ha-994751 kubelet[1305]: E1004 03:24:07.379509    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012247378682972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:17 ha-994751 kubelet[1305]: E1004 03:24:17.381784    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012257381348480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:17 ha-994751 kubelet[1305]: E1004 03:24:17.382305    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012257381348480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:27 ha-994751 kubelet[1305]: E1004 03:24:27.387309    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012267384211934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:27 ha-994751 kubelet[1305]: E1004 03:24:27.387674    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012267384211934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:37 ha-994751 kubelet[1305]: E1004 03:24:37.389662    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012277389023499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:37 ha-994751 kubelet[1305]: E1004 03:24:37.390147    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012277389023499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:47 ha-994751 kubelet[1305]: E1004 03:24:47.337368    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:24:47 ha-994751 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:24:47 ha-994751 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:24:47 ha-994751 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:24:47 ha-994751 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:24:47 ha-994751 kubelet[1305]: E1004 03:24:47.393080    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012287392471580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:47 ha-994751 kubelet[1305]: E1004 03:24:47.393113    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012287392471580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:57 ha-994751 kubelet[1305]: E1004 03:24:57.395248    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012297394773017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:57 ha-994751 kubelet[1305]: E1004 03:24:57.395590    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012297394773017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:25:07 ha-994751 kubelet[1305]: E1004 03:25:07.398270    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012307397806386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:25:07 ha-994751 kubelet[1305]: E1004 03:25:07.398317    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012307397806386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-994751 -n ha-994751
helpers_test.go:261: (dbg) Run:  kubectl --context ha-994751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr: (3.987118066s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
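Analysis note: the four failed checks at ha_test.go:437-446 count how many control-plane nodes, hosts, kubelets and apiservers the status output reports, and after "node start m02" they expect three control-plane nodes, four running hosts, four running kubelets and three running apiservers. A minimal sketch of that kind of count, assuming the plain-text minikube status layout ("host: Running", "kubelet: Running", "apiserver: Running") and not the actual test helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// countRunning is a hypothetical helper, not the real ha_test.go assertion: it runs
// minikube status for a profile and counts how many nodes report the given
// component as Running in the plain-text output.
func countRunning(profile, component string) int {
	// minikube status exits non-zero when any component is stopped, so the error
	// is ignored here and whatever output was produced is inspected.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"status", "-v=7", "--alsologtostderr").CombinedOutput()

	count := 0
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), component+": Running") {
			count++
		}
	}
	return count
}

func main() {
	// For a healthy 3 control-plane + 1 worker HA cluster the checks above expect
	// hosts=4, kubelets=4, apiservers=3.
	fmt.Printf("hosts=%d kubelets=%d apiservers=%d\n",
		countRunning("ha-994751", "host"),
		countRunning("ha-994751", "kubelet"),
		countRunning("ha-994751", "apiserver"))
}

Counting prefix matches line by line only illustrates the shape of the assertion; the empty status snippets above show the expected counts were not reached in this run.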
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-994751 -n ha-994751
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-994751 logs -n 25: (1.585584372s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751:/home/docker/cp-test_ha-994751-m03_ha-994751.txt                       |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751 sudo cat                                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m03_ha-994751.txt                                 |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m02:/home/docker/cp-test_ha-994751-m03_ha-994751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m02 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m03_ha-994751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04:/home/docker/cp-test_ha-994751-m03_ha-994751-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m04 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m03_ha-994751-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp testdata/cp-test.txt                                                | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1363640037/001/cp-test_ha-994751-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751:/home/docker/cp-test_ha-994751-m04_ha-994751.txt                       |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751 sudo cat                                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751.txt                                 |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m02:/home/docker/cp-test_ha-994751-m04_ha-994751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m02 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03:/home/docker/cp-test_ha-994751-m04_ha-994751-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m03 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-994751 node stop m02 -v=7                                                     | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-994751 node start m02 -v=7                                                    | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 03:18:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 03:18:05.722757   30630 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:18:05.722861   30630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:18:05.722866   30630 out.go:358] Setting ErrFile to fd 2...
	I1004 03:18:05.722871   30630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:18:05.723051   30630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 03:18:05.723672   30630 out.go:352] Setting JSON to false
	I1004 03:18:05.724646   30630 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3631,"bootTime":1728008255,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 03:18:05.724743   30630 start.go:139] virtualization: kvm guest
	I1004 03:18:05.726903   30630 out.go:177] * [ha-994751] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 03:18:05.728435   30630 notify.go:220] Checking for updates...
	I1004 03:18:05.728459   30630 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:18:05.730163   30630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:18:05.731580   30630 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:18:05.733048   30630 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:05.734449   30630 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 03:18:05.735914   30630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:18:05.737675   30630 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:18:05.774405   30630 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 03:18:05.775959   30630 start.go:297] selected driver: kvm2
	I1004 03:18:05.775980   30630 start.go:901] validating driver "kvm2" against <nil>
	I1004 03:18:05.775993   30630 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:18:05.776759   30630 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:18:05.776855   30630 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 03:18:05.791915   30630 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 03:18:05.791974   30630 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 03:18:05.792218   30630 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:18:05.792245   30630 cni.go:84] Creating CNI manager for ""
	I1004 03:18:05.792281   30630 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1004 03:18:05.792289   30630 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1004 03:18:05.792342   30630 start.go:340] cluster config:
	{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:18:05.792429   30630 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:18:05.794321   30630 out.go:177] * Starting "ha-994751" primary control-plane node in "ha-994751" cluster
	I1004 03:18:05.795797   30630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:18:05.795855   30630 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 03:18:05.795867   30630 cache.go:56] Caching tarball of preloaded images
	I1004 03:18:05.795948   30630 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 03:18:05.795958   30630 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:18:05.796250   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:18:05.796278   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json: {Name:mk8f786fa93ab6935652e46df2caeb1892ffd1fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:05.796426   30630 start.go:360] acquireMachinesLock for ha-994751: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 03:18:05.796455   30630 start.go:364] duration metric: took 15.921µs to acquireMachinesLock for "ha-994751"
	I1004 03:18:05.796470   30630 start.go:93] Provisioning new machine with config: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:18:05.796525   30630 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 03:18:05.798287   30630 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 03:18:05.798440   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:05.798475   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:05.812686   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34047
	I1004 03:18:05.813143   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:05.813678   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:05.813709   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:05.814066   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:05.814254   30630 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:18:05.814407   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:05.814549   30630 start.go:159] libmachine.API.Create for "ha-994751" (driver="kvm2")
	I1004 03:18:05.814572   30630 client.go:168] LocalClient.Create starting
	I1004 03:18:05.814612   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 03:18:05.814645   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:18:05.814661   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:18:05.814721   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 03:18:05.814738   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:18:05.814750   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:18:05.814764   30630 main.go:141] libmachine: Running pre-create checks...
	I1004 03:18:05.814779   30630 main.go:141] libmachine: (ha-994751) Calling .PreCreateCheck
	I1004 03:18:05.815056   30630 main.go:141] libmachine: (ha-994751) Calling .GetConfigRaw
	I1004 03:18:05.815402   30630 main.go:141] libmachine: Creating machine...
	I1004 03:18:05.815413   30630 main.go:141] libmachine: (ha-994751) Calling .Create
	I1004 03:18:05.815566   30630 main.go:141] libmachine: (ha-994751) Creating KVM machine...
	I1004 03:18:05.816861   30630 main.go:141] libmachine: (ha-994751) DBG | found existing default KVM network
	I1004 03:18:05.817536   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:05.817406   30653 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I1004 03:18:05.817563   30630 main.go:141] libmachine: (ha-994751) DBG | created network xml: 
	I1004 03:18:05.817586   30630 main.go:141] libmachine: (ha-994751) DBG | <network>
	I1004 03:18:05.817592   30630 main.go:141] libmachine: (ha-994751) DBG |   <name>mk-ha-994751</name>
	I1004 03:18:05.817597   30630 main.go:141] libmachine: (ha-994751) DBG |   <dns enable='no'/>
	I1004 03:18:05.817602   30630 main.go:141] libmachine: (ha-994751) DBG |   
	I1004 03:18:05.817610   30630 main.go:141] libmachine: (ha-994751) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1004 03:18:05.817615   30630 main.go:141] libmachine: (ha-994751) DBG |     <dhcp>
	I1004 03:18:05.817621   30630 main.go:141] libmachine: (ha-994751) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1004 03:18:05.817629   30630 main.go:141] libmachine: (ha-994751) DBG |     </dhcp>
	I1004 03:18:05.817644   30630 main.go:141] libmachine: (ha-994751) DBG |   </ip>
	I1004 03:18:05.817652   30630 main.go:141] libmachine: (ha-994751) DBG |   
	I1004 03:18:05.817659   30630 main.go:141] libmachine: (ha-994751) DBG | </network>
	I1004 03:18:05.817668   30630 main.go:141] libmachine: (ha-994751) DBG | 
	I1004 03:18:05.823178   30630 main.go:141] libmachine: (ha-994751) DBG | trying to create private KVM network mk-ha-994751 192.168.39.0/24...
	I1004 03:18:05.886885   30630 main.go:141] libmachine: (ha-994751) DBG | private KVM network mk-ha-994751 192.168.39.0/24 created
	I1004 03:18:05.886925   30630 main.go:141] libmachine: (ha-994751) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751 ...
	I1004 03:18:05.886940   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:05.886875   30653 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:05.886958   30630 main.go:141] libmachine: (ha-994751) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 03:18:05.887024   30630 main.go:141] libmachine: (ha-994751) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 03:18:06.142449   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:06.142299   30653 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa...
	I1004 03:18:06.210635   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:06.210526   30653 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/ha-994751.rawdisk...
	I1004 03:18:06.210664   30630 main.go:141] libmachine: (ha-994751) DBG | Writing magic tar header
	I1004 03:18:06.210677   30630 main.go:141] libmachine: (ha-994751) DBG | Writing SSH key tar header
	I1004 03:18:06.210688   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:06.210638   30653 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751 ...
	I1004 03:18:06.210755   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751
	I1004 03:18:06.210796   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751 (perms=drwx------)
	I1004 03:18:06.210813   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 03:18:06.210829   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:06.210837   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 03:18:06.210844   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 03:18:06.210850   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins
	I1004 03:18:06.210857   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 03:18:06.210924   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 03:18:06.210944   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 03:18:06.210949   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home
	I1004 03:18:06.210957   30630 main.go:141] libmachine: (ha-994751) DBG | Skipping /home - not owner
	I1004 03:18:06.210976   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 03:18:06.210990   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 03:18:06.210999   30630 main.go:141] libmachine: (ha-994751) Creating domain...
	I1004 03:18:06.212079   30630 main.go:141] libmachine: (ha-994751) define libvirt domain using xml: 
	I1004 03:18:06.212103   30630 main.go:141] libmachine: (ha-994751) <domain type='kvm'>
	I1004 03:18:06.212112   30630 main.go:141] libmachine: (ha-994751)   <name>ha-994751</name>
	I1004 03:18:06.212118   30630 main.go:141] libmachine: (ha-994751)   <memory unit='MiB'>2200</memory>
	I1004 03:18:06.212126   30630 main.go:141] libmachine: (ha-994751)   <vcpu>2</vcpu>
	I1004 03:18:06.212132   30630 main.go:141] libmachine: (ha-994751)   <features>
	I1004 03:18:06.212140   30630 main.go:141] libmachine: (ha-994751)     <acpi/>
	I1004 03:18:06.212152   30630 main.go:141] libmachine: (ha-994751)     <apic/>
	I1004 03:18:06.212164   30630 main.go:141] libmachine: (ha-994751)     <pae/>
	I1004 03:18:06.212177   30630 main.go:141] libmachine: (ha-994751)     
	I1004 03:18:06.212187   30630 main.go:141] libmachine: (ha-994751)   </features>
	I1004 03:18:06.212192   30630 main.go:141] libmachine: (ha-994751)   <cpu mode='host-passthrough'>
	I1004 03:18:06.212196   30630 main.go:141] libmachine: (ha-994751)   
	I1004 03:18:06.212200   30630 main.go:141] libmachine: (ha-994751)   </cpu>
	I1004 03:18:06.212204   30630 main.go:141] libmachine: (ha-994751)   <os>
	I1004 03:18:06.212210   30630 main.go:141] libmachine: (ha-994751)     <type>hvm</type>
	I1004 03:18:06.212215   30630 main.go:141] libmachine: (ha-994751)     <boot dev='cdrom'/>
	I1004 03:18:06.212228   30630 main.go:141] libmachine: (ha-994751)     <boot dev='hd'/>
	I1004 03:18:06.212253   30630 main.go:141] libmachine: (ha-994751)     <bootmenu enable='no'/>
	I1004 03:18:06.212268   30630 main.go:141] libmachine: (ha-994751)   </os>
	I1004 03:18:06.212286   30630 main.go:141] libmachine: (ha-994751)   <devices>
	I1004 03:18:06.212296   30630 main.go:141] libmachine: (ha-994751)     <disk type='file' device='cdrom'>
	I1004 03:18:06.212309   30630 main.go:141] libmachine: (ha-994751)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/boot2docker.iso'/>
	I1004 03:18:06.212319   30630 main.go:141] libmachine: (ha-994751)       <target dev='hdc' bus='scsi'/>
	I1004 03:18:06.212330   30630 main.go:141] libmachine: (ha-994751)       <readonly/>
	I1004 03:18:06.212334   30630 main.go:141] libmachine: (ha-994751)     </disk>
	I1004 03:18:06.212342   30630 main.go:141] libmachine: (ha-994751)     <disk type='file' device='disk'>
	I1004 03:18:06.212354   30630 main.go:141] libmachine: (ha-994751)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 03:18:06.212370   30630 main.go:141] libmachine: (ha-994751)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/ha-994751.rawdisk'/>
	I1004 03:18:06.212380   30630 main.go:141] libmachine: (ha-994751)       <target dev='hda' bus='virtio'/>
	I1004 03:18:06.212388   30630 main.go:141] libmachine: (ha-994751)     </disk>
	I1004 03:18:06.212397   30630 main.go:141] libmachine: (ha-994751)     <interface type='network'>
	I1004 03:18:06.212406   30630 main.go:141] libmachine: (ha-994751)       <source network='mk-ha-994751'/>
	I1004 03:18:06.212415   30630 main.go:141] libmachine: (ha-994751)       <model type='virtio'/>
	I1004 03:18:06.212440   30630 main.go:141] libmachine: (ha-994751)     </interface>
	I1004 03:18:06.212460   30630 main.go:141] libmachine: (ha-994751)     <interface type='network'>
	I1004 03:18:06.212467   30630 main.go:141] libmachine: (ha-994751)       <source network='default'/>
	I1004 03:18:06.212471   30630 main.go:141] libmachine: (ha-994751)       <model type='virtio'/>
	I1004 03:18:06.212479   30630 main.go:141] libmachine: (ha-994751)     </interface>
	I1004 03:18:06.212494   30630 main.go:141] libmachine: (ha-994751)     <serial type='pty'>
	I1004 03:18:06.212502   30630 main.go:141] libmachine: (ha-994751)       <target port='0'/>
	I1004 03:18:06.212508   30630 main.go:141] libmachine: (ha-994751)     </serial>
	I1004 03:18:06.212516   30630 main.go:141] libmachine: (ha-994751)     <console type='pty'>
	I1004 03:18:06.212520   30630 main.go:141] libmachine: (ha-994751)       <target type='serial' port='0'/>
	I1004 03:18:06.212542   30630 main.go:141] libmachine: (ha-994751)     </console>
	I1004 03:18:06.212560   30630 main.go:141] libmachine: (ha-994751)     <rng model='virtio'>
	I1004 03:18:06.212574   30630 main.go:141] libmachine: (ha-994751)       <backend model='random'>/dev/random</backend>
	I1004 03:18:06.212585   30630 main.go:141] libmachine: (ha-994751)     </rng>
	I1004 03:18:06.212593   30630 main.go:141] libmachine: (ha-994751)     
	I1004 03:18:06.212602   30630 main.go:141] libmachine: (ha-994751)     
	I1004 03:18:06.212610   30630 main.go:141] libmachine: (ha-994751)   </devices>
	I1004 03:18:06.212618   30630 main.go:141] libmachine: (ha-994751) </domain>
	I1004 03:18:06.212627   30630 main.go:141] libmachine: (ha-994751) 
	I1004 03:18:06.216801   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:e9:7d:48 in network default
	I1004 03:18:06.217289   30630 main.go:141] libmachine: (ha-994751) Ensuring networks are active...
	I1004 03:18:06.217308   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:06.217978   30630 main.go:141] libmachine: (ha-994751) Ensuring network default is active
	I1004 03:18:06.218330   30630 main.go:141] libmachine: (ha-994751) Ensuring network mk-ha-994751 is active
	I1004 03:18:06.218792   30630 main.go:141] libmachine: (ha-994751) Getting domain xml...
	I1004 03:18:06.219458   30630 main.go:141] libmachine: (ha-994751) Creating domain...
	I1004 03:18:07.407094   30630 main.go:141] libmachine: (ha-994751) Waiting to get IP...
	I1004 03:18:07.407817   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:07.408229   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:07.408273   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:07.408187   30653 retry.go:31] will retry after 265.096314ms: waiting for machine to come up
	I1004 03:18:07.674734   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:07.675129   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:07.675155   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:07.675076   30653 retry.go:31] will retry after 390.620211ms: waiting for machine to come up
	I1004 03:18:08.067622   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:08.068086   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:08.068114   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:08.068031   30653 retry.go:31] will retry after 362.909556ms: waiting for machine to come up
	I1004 03:18:08.432460   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:08.432888   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:08.432909   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:08.432822   30653 retry.go:31] will retry after 609.869022ms: waiting for machine to come up
	I1004 03:18:09.044728   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:09.045180   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:09.045206   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:09.045129   30653 retry.go:31] will retry after 721.849297ms: waiting for machine to come up
	I1004 03:18:09.769005   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:09.769517   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:09.769542   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:09.769465   30653 retry.go:31] will retry after 920.066652ms: waiting for machine to come up
	I1004 03:18:10.691477   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:10.691934   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:10.691982   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:10.691880   30653 retry.go:31] will retry after 915.375779ms: waiting for machine to come up
	I1004 03:18:11.608614   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:11.609000   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:11.609026   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:11.608956   30653 retry.go:31] will retry after 1.213056064s: waiting for machine to come up
	I1004 03:18:12.823425   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:12.823843   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:12.823863   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:12.823799   30653 retry.go:31] will retry after 1.167496597s: waiting for machine to come up
	I1004 03:18:13.993222   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:13.993651   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:13.993670   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:13.993625   30653 retry.go:31] will retry after 1.774059142s: waiting for machine to come up
	I1004 03:18:15.769014   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:15.769477   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:15.769521   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:15.769420   30653 retry.go:31] will retry after 2.081580382s: waiting for machine to come up
	I1004 03:18:17.853131   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:17.853479   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:17.853503   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:17.853441   30653 retry.go:31] will retry after 3.090115259s: waiting for machine to come up
	I1004 03:18:20.945030   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:20.945469   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:20.945493   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:20.945409   30653 retry.go:31] will retry after 4.314609333s: waiting for machine to come up
	I1004 03:18:25.264846   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:25.265316   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:25.265335   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:25.265278   30653 retry.go:31] will retry after 4.302479318s: waiting for machine to come up
	I1004 03:18:29.572575   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.572946   30630 main.go:141] libmachine: (ha-994751) Found IP for machine: 192.168.39.65
	I1004 03:18:29.572975   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has current primary IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.572983   30630 main.go:141] libmachine: (ha-994751) Reserving static IP address...
	I1004 03:18:29.573371   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find host DHCP lease matching {name: "ha-994751", mac: "52:54:00:9b:b2:a8", ip: "192.168.39.65"} in network mk-ha-994751
	I1004 03:18:29.642317   30630 main.go:141] libmachine: (ha-994751) DBG | Getting to WaitForSSH function...
	I1004 03:18:29.642344   30630 main.go:141] libmachine: (ha-994751) Reserved static IP address: 192.168.39.65
	I1004 03:18:29.642356   30630 main.go:141] libmachine: (ha-994751) Waiting for SSH to be available...
	I1004 03:18:29.644819   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.645174   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:29.645189   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.645350   30630 main.go:141] libmachine: (ha-994751) DBG | Using SSH client type: external
	I1004 03:18:29.645373   30630 main.go:141] libmachine: (ha-994751) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa (-rw-------)
	I1004 03:18:29.645433   30630 main.go:141] libmachine: (ha-994751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 03:18:29.645459   30630 main.go:141] libmachine: (ha-994751) DBG | About to run SSH command:
	I1004 03:18:29.645475   30630 main.go:141] libmachine: (ha-994751) DBG | exit 0
	I1004 03:18:29.768066   30630 main.go:141] libmachine: (ha-994751) DBG | SSH cmd err, output: <nil>: 
	I1004 03:18:29.768301   30630 main.go:141] libmachine: (ha-994751) KVM machine creation complete!
	I1004 03:18:29.768621   30630 main.go:141] libmachine: (ha-994751) Calling .GetConfigRaw
	I1004 03:18:29.769131   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:29.769285   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:29.769480   30630 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 03:18:29.769497   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:18:29.770831   30630 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 03:18:29.770850   30630 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 03:18:29.770858   30630 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 03:18:29.770868   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:29.772990   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.773299   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:29.773321   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.773460   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:29.773635   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.773787   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.773964   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:29.774099   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:29.774324   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:29.774336   30630 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 03:18:29.870824   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:18:29.870852   30630 main.go:141] libmachine: Detecting the provisioner...
	I1004 03:18:29.870864   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:29.873067   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.873430   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:29.873464   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.873650   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:29.873816   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.873947   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.874038   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:29.874214   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:29.874367   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:29.874377   30630 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 03:18:29.972554   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 03:18:29.972627   30630 main.go:141] libmachine: found compatible host: buildroot
	I1004 03:18:29.972634   30630 main.go:141] libmachine: Provisioning with buildroot...
	I1004 03:18:29.972640   30630 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:18:29.972883   30630 buildroot.go:166] provisioning hostname "ha-994751"
	I1004 03:18:29.972906   30630 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:18:29.973092   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:29.975627   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.976040   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:29.976059   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.976197   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:29.976336   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.976489   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.976626   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:29.976745   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:29.976951   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:29.976969   30630 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-994751 && echo "ha-994751" | sudo tee /etc/hostname
	I1004 03:18:30.090454   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-994751
	
	I1004 03:18:30.090480   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.094372   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.094783   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.094812   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.094993   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.095167   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.095331   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.095446   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.095586   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:30.095799   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:30.095822   30630 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-994751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-994751/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-994751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:18:30.200998   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:18:30.201031   30630 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 03:18:30.201106   30630 buildroot.go:174] setting up certificates
	I1004 03:18:30.201120   30630 provision.go:84] configureAuth start
	I1004 03:18:30.201131   30630 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:18:30.201353   30630 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:18:30.203920   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.204369   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.204390   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.204563   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.206770   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.207168   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.207195   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.207325   30630 provision.go:143] copyHostCerts
	I1004 03:18:30.207355   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:18:30.207398   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 03:18:30.207407   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:18:30.207474   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 03:18:30.207553   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:18:30.207574   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 03:18:30.207581   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:18:30.207605   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 03:18:30.207644   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:18:30.207661   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 03:18:30.207671   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:18:30.207691   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 03:18:30.207739   30630 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.ha-994751 san=[127.0.0.1 192.168.39.65 ha-994751 localhost minikube]
	I1004 03:18:30.399105   30630 provision.go:177] copyRemoteCerts
	I1004 03:18:30.399156   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:18:30.399185   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.401949   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.402239   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.402273   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.402458   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.402612   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.402732   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.402824   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:30.481271   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:18:30.481342   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 03:18:30.505491   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:18:30.505567   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:18:30.528533   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:18:30.528602   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1004 03:18:30.551611   30630 provision.go:87] duration metric: took 350.480163ms to configureAuth
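configureAuth above generates a Docker-machine style server certificate whose SAN list mixes IP addresses and DNS names (san=[127.0.0.1 192.168.39.65 ha-994751 localhost minikube]). A minimal standard-library sketch of how such a flat list splits into the IPAddresses and DNSNames fields an x509 certificate template expects:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// SAN list copied from the provision log above
		sans := []string{"127.0.0.1", "192.168.39.65", "ha-994751", "localhost", "minikube"}
		var ipSANs []net.IP
		var dnsSANs []string
		for _, s := range sans {
			if ip := net.ParseIP(s); ip != nil {
				ipSANs = append(ipSANs, ip) // goes into x509.Certificate.IPAddresses
			} else {
				dnsSANs = append(dnsSANs, s) // goes into x509.Certificate.DNSNames
			}
		}
		fmt.Println("IPAddresses:", ipSANs)
		fmt.Println("DNSNames:", dnsSANs)
	}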
	I1004 03:18:30.551641   30630 buildroot.go:189] setting minikube options for container-runtime
	I1004 03:18:30.551807   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:18:30.551909   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.554312   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.554641   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.554668   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.554833   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.554998   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.555138   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.555257   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.555398   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:30.555570   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:30.555585   30630 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:18:30.762357   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:18:30.762381   30630 main.go:141] libmachine: Checking connection to Docker...
	I1004 03:18:30.762388   30630 main.go:141] libmachine: (ha-994751) Calling .GetURL
	I1004 03:18:30.763606   30630 main.go:141] libmachine: (ha-994751) DBG | Using libvirt version 6000000
	I1004 03:18:30.765692   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.766020   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.766048   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.766206   30630 main.go:141] libmachine: Docker is up and running!
	I1004 03:18:30.766228   30630 main.go:141] libmachine: Reticulating splines...
	I1004 03:18:30.766236   30630 client.go:171] duration metric: took 24.951657625s to LocalClient.Create
	I1004 03:18:30.766258   30630 start.go:167] duration metric: took 24.951708327s to libmachine.API.Create "ha-994751"
	I1004 03:18:30.766279   30630 start.go:293] postStartSetup for "ha-994751" (driver="kvm2")
	I1004 03:18:30.766291   30630 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:18:30.766310   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.766550   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:18:30.766573   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.768581   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.768893   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.768918   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.769018   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.769215   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.769374   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.769501   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:30.850107   30630 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:18:30.854350   30630 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 03:18:30.854372   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 03:18:30.854448   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 03:18:30.854554   30630 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 03:18:30.854567   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /etc/ssl/certs/168792.pem
	I1004 03:18:30.854687   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:18:30.863939   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:18:30.887968   30630 start.go:296] duration metric: took 121.677235ms for postStartSetup
	I1004 03:18:30.888032   30630 main.go:141] libmachine: (ha-994751) Calling .GetConfigRaw
	I1004 03:18:30.888647   30630 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:18:30.891188   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.891538   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.891578   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.891766   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:18:30.891959   30630 start.go:128] duration metric: took 25.095424862s to createHost
	I1004 03:18:30.891980   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.894352   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.894614   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.894640   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.894753   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.894910   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.895041   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.895137   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.895264   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:30.895466   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:30.895480   30630 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 03:18:30.992599   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011910.970126057
	
	I1004 03:18:30.992618   30630 fix.go:216] guest clock: 1728011910.970126057
	I1004 03:18:30.992625   30630 fix.go:229] Guest: 2024-10-04 03:18:30.970126057 +0000 UTC Remote: 2024-10-04 03:18:30.89197094 +0000 UTC m=+25.204801944 (delta=78.155117ms)
	I1004 03:18:30.992662   30630 fix.go:200] guest clock delta is within tolerance: 78.155117ms
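The `date +%s.%N` round-trip above measures the skew between the guest clock and the host clock; the run proceeds because the 78.155117ms delta is within tolerance. A small sketch of that comparison, using the two timestamps from the log (the 1s threshold here is illustrative, not necessarily minikube's actual tolerance):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// values taken from the log lines above
		guest := time.Date(2024, 10, 4, 3, 18, 30, 970126057, time.UTC)
		host := time.Date(2024, 10, 4, 3, 18, 30, 891970940, time.UTC)

		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 1 * time.Second // illustrative threshold
		fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
	}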
	I1004 03:18:30.992667   30630 start.go:83] releasing machines lock for "ha-994751", held for 25.19620396s
	I1004 03:18:30.992685   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.992896   30630 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:18:30.995326   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.995629   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.995653   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.995813   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.996311   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.996458   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.996541   30630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:18:30.996578   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.996668   30630 ssh_runner.go:195] Run: cat /version.json
	I1004 03:18:30.996687   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.999188   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.999227   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.999574   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.999599   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.999648   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.999673   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.999727   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.999923   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.999936   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:31.000065   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:31.000137   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:31.000197   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:31.000242   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:31.000338   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:31.092724   30630 ssh_runner.go:195] Run: systemctl --version
	I1004 03:18:31.098738   30630 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:18:31.257592   30630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 03:18:31.263326   30630 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 03:18:31.263402   30630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:18:31.278780   30630 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 03:18:31.278800   30630 start.go:495] detecting cgroup driver to use...
	I1004 03:18:31.278866   30630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:18:31.295874   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:18:31.310006   30630 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:18:31.310076   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:18:31.323189   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:18:31.336586   30630 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:18:31.452424   30630 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:18:31.611505   30630 docker.go:233] disabling docker service ...
	I1004 03:18:31.611576   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:18:31.625795   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:18:31.640666   30630 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:18:31.774429   30630 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:18:31.903530   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:18:31.917157   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:18:31.935039   30630 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:18:31.935118   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.945550   30630 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:18:31.945617   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.955961   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.966381   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.976764   30630 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:18:31.987308   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.997608   30630 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:32.014334   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:32.025406   30630 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:18:32.035105   30630 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 03:18:32.035157   30630 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 03:18:32.048803   30630 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
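The sequence above is the usual kernel prep for a bridged CNI: the sysctl probe fails because br_netfilter is not yet loaded, so the module is loaded and IPv4 forwarding is switched on before CRI-O is restarted. A rough Go sketch of the same check-then-modprobe fallback (paths and commands as in the log, error handling simplified, must run as root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(key); err != nil {
			// sysctl key missing: load br_netfilter, as the log does after the failed probe
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Printf("modprobe failed: %v: %s\n", err, out)
				return
			}
		}
		// enable IPv4 forwarding, mirroring `echo 1 > /proc/sys/net/ipv4/ip_forward`
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
			fmt.Println("enable ip_forward:", err)
		}
	}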
	I1004 03:18:32.058421   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:18:32.175897   30630 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 03:18:32.272377   30630 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:18:32.272435   30630 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:18:32.277743   30630 start.go:563] Will wait 60s for crictl version
	I1004 03:18:32.277805   30630 ssh_runner.go:195] Run: which crictl
	I1004 03:18:32.281362   30630 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:18:32.318848   30630 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 03:18:32.318925   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:18:32.346909   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:18:32.375477   30630 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 03:18:32.376825   30630 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:18:32.379208   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:32.379571   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:32.379594   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:32.379801   30630 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 03:18:32.384207   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:18:32.397053   30630 kubeadm.go:883] updating cluster {Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 03:18:32.397153   30630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:18:32.397223   30630 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:18:32.434648   30630 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 03:18:32.434703   30630 ssh_runner.go:195] Run: which lz4
	I1004 03:18:32.438603   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1004 03:18:32.438682   30630 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 03:18:32.442788   30630 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 03:18:32.442821   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 03:18:33.747633   30630 crio.go:462] duration metric: took 1.308983475s to copy over tarball
	I1004 03:18:33.747699   30630 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 03:18:35.713127   30630 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.965391744s)
	I1004 03:18:35.713157   30630 crio.go:469] duration metric: took 1.965495286s to extract the tarball
	I1004 03:18:35.713167   30630 ssh_runner.go:146] rm: /preloaded.tar.lz4
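The "duration metric" lines above come from the common pattern of capturing a start time and logging time.Since once the step completes. A minimal sketch of that pattern around the tarball extraction step (command arguments copied from the log; the wrapper itself is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if err := cmd.Run(); err != nil {
			fmt.Println("extract failed:", err)
			return
		}
		fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
	}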
	I1004 03:18:35.749886   30630 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:18:35.795226   30630 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:18:35.795249   30630 cache_images.go:84] Images are preloaded, skipping loading
	I1004 03:18:35.795257   30630 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.31.1 crio true true} ...
	I1004 03:18:35.795346   30630 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-994751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
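The kubelet drop-in above is rendered from the node settings shown in the trailing config struct, with --hostname-override and --node-ip filled in per node. A sketch, using text/template, of how that ExecStart line could be produced; the template text mirrors the log output, while the struct and field names are illustrative:

	package main

	import (
		"os"
		"text/template"
	)

	type kubeletOpts struct {
		Version  string
		NodeName string
		NodeIP   string
	}

	const unit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		_ = t.Execute(os.Stdout, kubeletOpts{Version: "v1.31.1", NodeName: "ha-994751", NodeIP: "192.168.39.65"})
	}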
	I1004 03:18:35.795408   30630 ssh_runner.go:195] Run: crio config
	I1004 03:18:35.841695   30630 cni.go:84] Creating CNI manager for ""
	I1004 03:18:35.841718   30630 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1004 03:18:35.841728   30630 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 03:18:35.841746   30630 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-994751 NodeName:ha-994751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 03:18:35.841868   30630 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-994751"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
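The kubeadm config above keeps the pod network (podSubnet 10.244.0.0/16) disjoint from the service network (serviceSubnet 10.96.0.0/12); if the two ranges overlapped, service VIPs could collide with pod IPs. A quick standard-library check of that assumption:

	package main

	import (
		"fmt"
		"net"
	)

	// overlaps reports whether two CIDR blocks share any addresses: aligned
	// prefixes overlap exactly when either network contains the other's base address.
	func overlaps(a, b *net.IPNet) bool {
		return a.Contains(b.IP) || b.Contains(a.IP)
	}

	func main() {
		_, podNet, _ := net.ParseCIDR("10.244.0.0/16")
		_, svcNet, _ := net.ParseCIDR("10.96.0.0/12")
		fmt.Println("pod/service CIDRs overlap:", overlaps(podNet, svcNet)) // false
	}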
	
	I1004 03:18:35.841893   30630 kube-vip.go:115] generating kube-vip config ...
	I1004 03:18:35.841933   30630 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1004 03:18:35.858111   30630 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1004 03:18:35.858218   30630 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
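In the kube-vip manifest above, leader election is tuned with vip_leaseduration=5, vip_renewdeadline=3 and vip_retryperiod=1; for a stable election the renew deadline should be shorter than the lease duration and the retry period shorter still. A small sketch of that sanity check (the validation is illustrative, not part of minikube or kube-vip):

	package main

	import "fmt"

	// validateLeaderElection mirrors the usual leader-election constraint:
	// leaseDuration > renewDeadline > retryPeriod, all in seconds.
	func validateLeaderElection(leaseDuration, renewDeadline, retryPeriod int) error {
		if !(leaseDuration > renewDeadline && renewDeadline > retryPeriod && retryPeriod > 0) {
			return fmt.Errorf("invalid leader-election timings: lease=%d renew=%d retry=%d",
				leaseDuration, renewDeadline, retryPeriod)
		}
		return nil
	}

	func main() {
		// values from the kube-vip config above
		fmt.Println(validateLeaderElection(5, 3, 1)) // <nil>
	}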
	I1004 03:18:35.858274   30630 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:18:35.867809   30630 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 03:18:35.867872   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1004 03:18:35.876830   30630 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1004 03:18:35.892172   30630 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:18:35.907631   30630 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1004 03:18:35.923147   30630 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1004 03:18:35.939242   30630 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:18:35.943241   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:18:35.955036   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:18:36.063830   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:18:36.080131   30630 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751 for IP: 192.168.39.65
	I1004 03:18:36.080153   30630 certs.go:194] generating shared ca certs ...
	I1004 03:18:36.080169   30630 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.080303   30630 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 03:18:36.080336   30630 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 03:18:36.080345   30630 certs.go:256] generating profile certs ...
	I1004 03:18:36.080388   30630 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key
	I1004 03:18:36.080414   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt with IP's: []
	I1004 03:18:36.205325   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt ...
	I1004 03:18:36.205354   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt: {Name:mk097459d54d355cf05d74a196b72b51ed16216c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.205539   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key ...
	I1004 03:18:36.205553   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key: {Name:mka6efef398570320df79b26ee2d84116b88400b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.205628   30630 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.211fcd35
	I1004 03:18:36.205642   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.211fcd35 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65 192.168.39.254]
	I1004 03:18:36.278398   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.211fcd35 ...
	I1004 03:18:36.278426   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.211fcd35: {Name:mk5a54fedcb658e02d5a59c4cc7f959d0efc3b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.278574   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.211fcd35 ...
	I1004 03:18:36.278586   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.211fcd35: {Name:mk30bcb47c9e314eff3c9b6a3bb1c1b8ba019417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.278653   30630 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.211fcd35 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt
	I1004 03:18:36.278741   30630 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.211fcd35 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key
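The apiserver certificate generated above carries 10.96.0.1 among its SANs: that is the "kubernetes" service ClusterIP, conventionally the first usable address of the 10.96.0.0/12 service CIDR from the cluster config. A sketch of deriving it (IPv4 only, assuming the CIDR shown in this log):

	package main

	import (
		"fmt"
		"net"
	)

	// firstServiceIP returns the first usable address of an IPv4 service CIDR,
	// which Kubernetes reserves for the "kubernetes" ClusterIP.
	func firstServiceIP(cidr string) (net.IP, error) {
		_, n, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		ip := n.IP.To4()
		if ip == nil {
			return nil, fmt.Errorf("not an IPv4 CIDR: %s", cidr)
		}
		out := make(net.IP, len(ip))
		copy(out, ip)
		out[3]++ // 10.96.0.0 -> 10.96.0.1 (sufficient for a .0-aligned base)
		return out, nil
	}

	func main() {
		ip, err := firstServiceIP("10.96.0.0/12")
		fmt.Println(ip, err) // 10.96.0.1 <nil>
	}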
	I1004 03:18:36.278802   30630 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key
	I1004 03:18:36.278825   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt with IP's: []
	I1004 03:18:36.411462   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt ...
	I1004 03:18:36.411499   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt: {Name:mk5cbb9b0a13c8121c937d53956001313fc362d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.411652   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key ...
	I1004 03:18:36.411663   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key: {Name:mkcfa953ddb2aa55fb392dd2b0300dc4d7ed9a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.411729   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:18:36.411745   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:18:36.411758   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:18:36.411771   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:18:36.411798   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:18:36.411811   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:18:36.411823   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:18:36.411835   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:18:36.411884   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 03:18:36.411919   30630 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 03:18:36.411928   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:18:36.411953   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:18:36.411976   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:18:36.411996   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 03:18:36.412030   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:18:36.412053   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:18:36.412066   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem -> /usr/share/ca-certificates/16879.pem
	I1004 03:18:36.412078   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /usr/share/ca-certificates/168792.pem
	I1004 03:18:36.412548   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:18:36.441146   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 03:18:36.468175   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:18:36.494488   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 03:18:36.520930   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1004 03:18:36.546306   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 03:18:36.571622   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:18:36.595650   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:18:36.619154   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:18:36.643284   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 03:18:36.666998   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 03:18:36.692308   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 03:18:36.710569   30630 ssh_runner.go:195] Run: openssl version
	I1004 03:18:36.722532   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:18:36.738971   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:18:36.743511   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:18:36.743568   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:18:36.749416   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:18:36.760315   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 03:18:36.771516   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 03:18:36.776032   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:18:36.776090   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 03:18:36.781784   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 03:18:36.792883   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 03:18:36.804051   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 03:18:36.808536   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:18:36.808596   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 03:18:36.814260   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
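Each certificate installed under /usr/share/ca-certificates above is then exposed to OpenSSL by symlinking it as <subject-hash>.0 under /etc/ssl/certs (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A sketch that shells out to openssl the same way the log does (cert path taken from the log; requires root and openssl on PATH):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath,
	// the directory layout OpenSSL uses to locate trusted CA certificates.
	func linkByHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // replace a stale link if present
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}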
	I1004 03:18:36.827637   30630 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:18:36.833576   30630 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 03:18:36.833628   30630 kubeadm.go:392] StartCluster: {Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:18:36.833720   30630 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 03:18:36.833768   30630 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 03:18:36.890855   30630 cri.go:89] found id: ""
	I1004 03:18:36.890927   30630 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 03:18:36.902870   30630 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 03:18:36.912801   30630 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 03:18:36.922312   30630 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 03:18:36.922332   30630 kubeadm.go:157] found existing configuration files:
	
	I1004 03:18:36.922378   30630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 03:18:36.931373   30630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 03:18:36.931434   30630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 03:18:36.940992   30630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 03:18:36.949951   30630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 03:18:36.950008   30630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 03:18:36.959253   30630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 03:18:36.968235   30630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 03:18:36.968290   30630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 03:18:36.977554   30630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 03:18:36.986351   30630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 03:18:36.986408   30630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 03:18:36.995719   30630 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 03:18:37.089352   30630 kubeadm.go:310] W1004 03:18:37.073375     834 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 03:18:37.090411   30630 kubeadm.go:310] W1004 03:18:37.074383     834 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 03:18:37.191769   30630 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 03:18:47.918991   30630 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 03:18:47.919112   30630 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 03:18:47.919261   30630 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 03:18:47.919365   30630 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 03:18:47.919464   30630 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 03:18:47.919518   30630 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 03:18:47.920818   30630 out.go:235]   - Generating certificates and keys ...
	I1004 03:18:47.920882   30630 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 03:18:47.920936   30630 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 03:18:47.921009   30630 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 03:18:47.921075   30630 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1004 03:18:47.921133   30630 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1004 03:18:47.921203   30630 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1004 03:18:47.921280   30630 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1004 03:18:47.921443   30630 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-994751 localhost] and IPs [192.168.39.65 127.0.0.1 ::1]
	I1004 03:18:47.921519   30630 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1004 03:18:47.921666   30630 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-994751 localhost] and IPs [192.168.39.65 127.0.0.1 ::1]
	I1004 03:18:47.921762   30630 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 03:18:47.921849   30630 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 03:18:47.921910   30630 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1004 03:18:47.922005   30630 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 03:18:47.922057   30630 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 03:18:47.922112   30630 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 03:18:47.922177   30630 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 03:18:47.922290   30630 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 03:18:47.922377   30630 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 03:18:47.922447   30630 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 03:18:47.922519   30630 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 03:18:47.923983   30630 out.go:235]   - Booting up control plane ...
	I1004 03:18:47.924085   30630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 03:18:47.924153   30630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 03:18:47.924208   30630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 03:18:47.924334   30630 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 03:18:47.924425   30630 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 03:18:47.924472   30630 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 03:18:47.924582   30630 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 03:18:47.924675   30630 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 03:18:47.924735   30630 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001267899s
	I1004 03:18:47.924846   30630 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 03:18:47.924901   30630 kubeadm.go:310] [api-check] The API server is healthy after 5.62627754s
	I1004 03:18:47.924992   30630 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 03:18:47.925097   30630 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 03:18:47.925151   30630 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 03:18:47.925310   30630 kubeadm.go:310] [mark-control-plane] Marking the node ha-994751 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 03:18:47.925388   30630 kubeadm.go:310] [bootstrap-token] Using token: t8dola.kmwzcq881z4dnfcq
	I1004 03:18:47.926624   30630 out.go:235]   - Configuring RBAC rules ...
	I1004 03:18:47.926738   30630 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 03:18:47.926809   30630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 03:18:47.926957   30630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 03:18:47.927140   30630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 03:18:47.927310   30630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 03:18:47.927398   30630 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 03:18:47.927508   30630 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 03:18:47.927559   30630 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 03:18:47.927607   30630 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 03:18:47.927613   30630 kubeadm.go:310] 
	I1004 03:18:47.927661   30630 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 03:18:47.927667   30630 kubeadm.go:310] 
	I1004 03:18:47.927736   30630 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 03:18:47.927742   30630 kubeadm.go:310] 
	I1004 03:18:47.927764   30630 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 03:18:47.927863   30630 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 03:18:47.927918   30630 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 03:18:47.927926   30630 kubeadm.go:310] 
	I1004 03:18:47.927996   30630 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 03:18:47.928006   30630 kubeadm.go:310] 
	I1004 03:18:47.928052   30630 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 03:18:47.928059   30630 kubeadm.go:310] 
	I1004 03:18:47.928102   30630 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 03:18:47.928189   30630 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 03:18:47.928261   30630 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 03:18:47.928268   30630 kubeadm.go:310] 
	I1004 03:18:47.928337   30630 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 03:18:47.928401   30630 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 03:18:47.928407   30630 kubeadm.go:310] 
	I1004 03:18:47.928480   30630 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t8dola.kmwzcq881z4dnfcq \
	I1004 03:18:47.928565   30630 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 \
	I1004 03:18:47.928587   30630 kubeadm.go:310] 	--control-plane 
	I1004 03:18:47.928593   30630 kubeadm.go:310] 
	I1004 03:18:47.928677   30630 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 03:18:47.928689   30630 kubeadm.go:310] 
	I1004 03:18:47.928756   30630 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t8dola.kmwzcq881z4dnfcq \
	I1004 03:18:47.928856   30630 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 
	I1004 03:18:47.928865   30630 cni.go:84] Creating CNI manager for ""
	I1004 03:18:47.928870   30630 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1004 03:18:47.930177   30630 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1004 03:18:47.931356   30630 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1004 03:18:47.936846   30630 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1004 03:18:47.936861   30630 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1004 03:18:47.954946   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1004 03:18:48.341839   30630 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 03:18:48.341927   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-994751 minikube.k8s.io/updated_at=2024_10_04T03_18_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-994751 minikube.k8s.io/primary=true
	I1004 03:18:48.341931   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:48.378883   30630 ops.go:34] apiserver oom_adj: -16
	I1004 03:18:48.535248   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:49.035895   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:49.535506   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:50.036160   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:50.536177   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:51.036074   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:51.535453   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:52.036318   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:52.141351   30630 kubeadm.go:1113] duration metric: took 3.799503635s to wait for elevateKubeSystemPrivileges
	I1004 03:18:52.141482   30630 kubeadm.go:394] duration metric: took 15.307852794s to StartCluster
	I1004 03:18:52.141506   30630 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:52.141595   30630 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:18:52.142340   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:52.142543   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 03:18:52.142540   30630 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:18:52.142619   30630 start.go:241] waiting for startup goroutines ...
	I1004 03:18:52.142559   30630 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 03:18:52.142650   30630 addons.go:69] Setting default-storageclass=true in profile "ha-994751"
	I1004 03:18:52.142673   30630 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-994751"
	I1004 03:18:52.142653   30630 addons.go:69] Setting storage-provisioner=true in profile "ha-994751"
	I1004 03:18:52.142785   30630 addons.go:234] Setting addon storage-provisioner=true in "ha-994751"
	I1004 03:18:52.142836   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:18:52.142751   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:18:52.143105   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.143135   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.143203   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.143243   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.158739   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38855
	I1004 03:18:52.159139   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.159746   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.159801   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.160123   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.160704   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.160750   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.163696   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35851
	I1004 03:18:52.164259   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.164849   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.164876   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.165236   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.165397   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:18:52.167571   30630 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:18:52.167892   30630 kapi.go:59] client config for ha-994751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key", CAFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 03:18:52.168431   30630 cert_rotation.go:140] Starting client certificate rotation controller
	I1004 03:18:52.168621   30630 addons.go:234] Setting addon default-storageclass=true in "ha-994751"
	I1004 03:18:52.168661   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:18:52.168962   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.168995   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.177647   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33667
	I1004 03:18:52.178272   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.178780   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.178807   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.179185   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.179369   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:18:52.181245   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:52.182949   30630 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 03:18:52.184312   30630 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 03:18:52.184328   30630 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 03:18:52.184342   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:52.185802   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36897
	I1004 03:18:52.186249   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.186707   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.186731   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.187053   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.187403   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:52.187660   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.187699   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.187838   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:52.187860   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:52.188023   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:52.188171   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:52.188318   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:52.188522   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:52.202680   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36473
	I1004 03:18:52.203159   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.203886   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.203918   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.204247   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.204428   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:18:52.205967   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:52.206173   30630 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 03:18:52.206189   30630 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 03:18:52.206206   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:52.208832   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:52.209269   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:52.209304   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:52.209405   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:52.209567   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:52.209709   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:52.209838   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:52.346822   30630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 03:18:52.355141   30630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 03:18:52.371008   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 03:18:52.715722   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.715742   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.716027   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.716068   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.716084   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.716095   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.716104   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.716350   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.716358   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.716370   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.716432   30630 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1004 03:18:52.716457   30630 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1004 03:18:52.716568   30630 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1004 03:18:52.716579   30630 round_trippers.go:469] Request Headers:
	I1004 03:18:52.716592   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:18:52.716603   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:18:52.723900   30630 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:18:52.724457   30630 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1004 03:18:52.724472   30630 round_trippers.go:469] Request Headers:
	I1004 03:18:52.724481   30630 round_trippers.go:473]     Content-Type: application/json
	I1004 03:18:52.724485   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:18:52.724494   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:18:52.728158   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:18:52.728358   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.728379   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.728631   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.728667   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.728678   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.991032   30630 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1004 03:18:52.991106   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.991118   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.991464   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.991518   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.991525   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.991538   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.991549   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.991787   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.991815   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.991835   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.993564   30630 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1004 03:18:52.994914   30630 addons.go:510] duration metric: took 852.347466ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1004 03:18:52.994963   30630 start.go:246] waiting for cluster config update ...
	I1004 03:18:52.994978   30630 start.go:255] writing updated cluster config ...
	I1004 03:18:52.996475   30630 out.go:201] 
	I1004 03:18:52.997828   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:18:52.997937   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:18:52.999684   30630 out.go:177] * Starting "ha-994751-m02" control-plane node in "ha-994751" cluster
	I1004 03:18:53.001098   30630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:18:53.001129   30630 cache.go:56] Caching tarball of preloaded images
	I1004 03:18:53.001252   30630 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 03:18:53.001270   30630 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:18:53.001389   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:18:53.001704   30630 start.go:360] acquireMachinesLock for ha-994751-m02: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 03:18:53.001767   30630 start.go:364] duration metric: took 36.717µs to acquireMachinesLock for "ha-994751-m02"
	I1004 03:18:53.001788   30630 start.go:93] Provisioning new machine with config: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:18:53.001888   30630 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1004 03:18:53.003601   30630 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 03:18:53.003685   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:53.003710   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:53.018286   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I1004 03:18:53.018739   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:53.019227   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:53.019248   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:53.019586   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:53.019746   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetMachineName
	I1004 03:18:53.019878   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:18:53.020036   30630 start.go:159] libmachine.API.Create for "ha-994751" (driver="kvm2")
	I1004 03:18:53.020058   30630 client.go:168] LocalClient.Create starting
	I1004 03:18:53.020084   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 03:18:53.020121   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:18:53.020141   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:18:53.020189   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 03:18:53.020206   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:18:53.020216   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:18:53.020231   30630 main.go:141] libmachine: Running pre-create checks...
	I1004 03:18:53.020238   30630 main.go:141] libmachine: (ha-994751-m02) Calling .PreCreateCheck
	I1004 03:18:53.020407   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetConfigRaw
	I1004 03:18:53.020742   30630 main.go:141] libmachine: Creating machine...
	I1004 03:18:53.020759   30630 main.go:141] libmachine: (ha-994751-m02) Calling .Create
	I1004 03:18:53.020907   30630 main.go:141] libmachine: (ha-994751-m02) Creating KVM machine...
	I1004 03:18:53.022100   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found existing default KVM network
	I1004 03:18:53.022275   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found existing private KVM network mk-ha-994751
	I1004 03:18:53.022411   30630 main.go:141] libmachine: (ha-994751-m02) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02 ...
	I1004 03:18:53.022435   30630 main.go:141] libmachine: (ha-994751-m02) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 03:18:53.022495   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:53.022407   31016 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:53.022574   30630 main.go:141] libmachine: (ha-994751-m02) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 03:18:53.247842   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:53.247679   31016 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa...
	I1004 03:18:53.574709   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:53.574567   31016 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/ha-994751-m02.rawdisk...
	I1004 03:18:53.574744   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Writing magic tar header
	I1004 03:18:53.574759   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Writing SSH key tar header
	I1004 03:18:53.574776   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:53.574706   31016 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02 ...
	I1004 03:18:53.574856   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02
	I1004 03:18:53.574886   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02 (perms=drwx------)
	I1004 03:18:53.574896   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 03:18:53.574906   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 03:18:53.574926   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 03:18:53.574938   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 03:18:53.574962   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 03:18:53.574971   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:53.574979   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 03:18:53.574992   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 03:18:53.575005   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 03:18:53.575014   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins
	I1004 03:18:53.575020   30630 main.go:141] libmachine: (ha-994751-m02) Creating domain...
	I1004 03:18:53.575036   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home
	I1004 03:18:53.575046   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Skipping /home - not owner
	I1004 03:18:53.575952   30630 main.go:141] libmachine: (ha-994751-m02) define libvirt domain using xml: 
	I1004 03:18:53.575978   30630 main.go:141] libmachine: (ha-994751-m02) <domain type='kvm'>
	I1004 03:18:53.575998   30630 main.go:141] libmachine: (ha-994751-m02)   <name>ha-994751-m02</name>
	I1004 03:18:53.576012   30630 main.go:141] libmachine: (ha-994751-m02)   <memory unit='MiB'>2200</memory>
	I1004 03:18:53.576021   30630 main.go:141] libmachine: (ha-994751-m02)   <vcpu>2</vcpu>
	I1004 03:18:53.576030   30630 main.go:141] libmachine: (ha-994751-m02)   <features>
	I1004 03:18:53.576038   30630 main.go:141] libmachine: (ha-994751-m02)     <acpi/>
	I1004 03:18:53.576047   30630 main.go:141] libmachine: (ha-994751-m02)     <apic/>
	I1004 03:18:53.576055   30630 main.go:141] libmachine: (ha-994751-m02)     <pae/>
	I1004 03:18:53.576064   30630 main.go:141] libmachine: (ha-994751-m02)     
	I1004 03:18:53.576072   30630 main.go:141] libmachine: (ha-994751-m02)   </features>
	I1004 03:18:53.576082   30630 main.go:141] libmachine: (ha-994751-m02)   <cpu mode='host-passthrough'>
	I1004 03:18:53.576089   30630 main.go:141] libmachine: (ha-994751-m02)   
	I1004 03:18:53.576099   30630 main.go:141] libmachine: (ha-994751-m02)   </cpu>
	I1004 03:18:53.576106   30630 main.go:141] libmachine: (ha-994751-m02)   <os>
	I1004 03:18:53.576119   30630 main.go:141] libmachine: (ha-994751-m02)     <type>hvm</type>
	I1004 03:18:53.576130   30630 main.go:141] libmachine: (ha-994751-m02)     <boot dev='cdrom'/>
	I1004 03:18:53.576135   30630 main.go:141] libmachine: (ha-994751-m02)     <boot dev='hd'/>
	I1004 03:18:53.576144   30630 main.go:141] libmachine: (ha-994751-m02)     <bootmenu enable='no'/>
	I1004 03:18:53.576152   30630 main.go:141] libmachine: (ha-994751-m02)   </os>
	I1004 03:18:53.576165   30630 main.go:141] libmachine: (ha-994751-m02)   <devices>
	I1004 03:18:53.576176   30630 main.go:141] libmachine: (ha-994751-m02)     <disk type='file' device='cdrom'>
	I1004 03:18:53.576189   30630 main.go:141] libmachine: (ha-994751-m02)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/boot2docker.iso'/>
	I1004 03:18:53.576200   30630 main.go:141] libmachine: (ha-994751-m02)       <target dev='hdc' bus='scsi'/>
	I1004 03:18:53.576208   30630 main.go:141] libmachine: (ha-994751-m02)       <readonly/>
	I1004 03:18:53.576216   30630 main.go:141] libmachine: (ha-994751-m02)     </disk>
	I1004 03:18:53.576224   30630 main.go:141] libmachine: (ha-994751-m02)     <disk type='file' device='disk'>
	I1004 03:18:53.576236   30630 main.go:141] libmachine: (ha-994751-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 03:18:53.576251   30630 main.go:141] libmachine: (ha-994751-m02)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/ha-994751-m02.rawdisk'/>
	I1004 03:18:53.576261   30630 main.go:141] libmachine: (ha-994751-m02)       <target dev='hda' bus='virtio'/>
	I1004 03:18:53.576285   30630 main.go:141] libmachine: (ha-994751-m02)     </disk>
	I1004 03:18:53.576307   30630 main.go:141] libmachine: (ha-994751-m02)     <interface type='network'>
	I1004 03:18:53.576317   30630 main.go:141] libmachine: (ha-994751-m02)       <source network='mk-ha-994751'/>
	I1004 03:18:53.576324   30630 main.go:141] libmachine: (ha-994751-m02)       <model type='virtio'/>
	I1004 03:18:53.576335   30630 main.go:141] libmachine: (ha-994751-m02)     </interface>
	I1004 03:18:53.576342   30630 main.go:141] libmachine: (ha-994751-m02)     <interface type='network'>
	I1004 03:18:53.576368   30630 main.go:141] libmachine: (ha-994751-m02)       <source network='default'/>
	I1004 03:18:53.576386   30630 main.go:141] libmachine: (ha-994751-m02)       <model type='virtio'/>
	I1004 03:18:53.576401   30630 main.go:141] libmachine: (ha-994751-m02)     </interface>
	I1004 03:18:53.576413   30630 main.go:141] libmachine: (ha-994751-m02)     <serial type='pty'>
	I1004 03:18:53.576421   30630 main.go:141] libmachine: (ha-994751-m02)       <target port='0'/>
	I1004 03:18:53.576429   30630 main.go:141] libmachine: (ha-994751-m02)     </serial>
	I1004 03:18:53.576437   30630 main.go:141] libmachine: (ha-994751-m02)     <console type='pty'>
	I1004 03:18:53.576447   30630 main.go:141] libmachine: (ha-994751-m02)       <target type='serial' port='0'/>
	I1004 03:18:53.576455   30630 main.go:141] libmachine: (ha-994751-m02)     </console>
	I1004 03:18:53.576462   30630 main.go:141] libmachine: (ha-994751-m02)     <rng model='virtio'>
	I1004 03:18:53.576468   30630 main.go:141] libmachine: (ha-994751-m02)       <backend model='random'>/dev/random</backend>
	I1004 03:18:53.576474   30630 main.go:141] libmachine: (ha-994751-m02)     </rng>
	I1004 03:18:53.576479   30630 main.go:141] libmachine: (ha-994751-m02)     
	I1004 03:18:53.576482   30630 main.go:141] libmachine: (ha-994751-m02)     
	I1004 03:18:53.576487   30630 main.go:141] libmachine: (ha-994751-m02)   </devices>
	I1004 03:18:53.576497   30630 main.go:141] libmachine: (ha-994751-m02) </domain>
	I1004 03:18:53.576508   30630 main.go:141] libmachine: (ha-994751-m02) 
	I1004 03:18:53.583962   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:dd:b1:40 in network default
	I1004 03:18:53.584709   30630 main.go:141] libmachine: (ha-994751-m02) Ensuring networks are active...
	I1004 03:18:53.584740   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:53.585441   30630 main.go:141] libmachine: (ha-994751-m02) Ensuring network default is active
	I1004 03:18:53.585785   30630 main.go:141] libmachine: (ha-994751-m02) Ensuring network mk-ha-994751 is active
	I1004 03:18:53.586177   30630 main.go:141] libmachine: (ha-994751-m02) Getting domain xml...
	I1004 03:18:53.586870   30630 main.go:141] libmachine: (ha-994751-m02) Creating domain...
	I1004 03:18:54.836669   30630 main.go:141] libmachine: (ha-994751-m02) Waiting to get IP...
	I1004 03:18:54.837648   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:54.838068   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:54.838093   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:54.838048   31016 retry.go:31] will retry after 198.927613ms: waiting for machine to come up
	I1004 03:18:55.038453   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:55.038905   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:55.039050   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:55.039003   31016 retry.go:31] will retry after 306.415928ms: waiting for machine to come up
	I1004 03:18:55.347491   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:55.347913   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:55.347941   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:55.347876   31016 retry.go:31] will retry after 320.808758ms: waiting for machine to come up
	I1004 03:18:55.670381   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:55.670806   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:55.670832   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:55.670773   31016 retry.go:31] will retry after 393.714723ms: waiting for machine to come up
	I1004 03:18:56.066334   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:56.066789   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:56.066816   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:56.066737   31016 retry.go:31] will retry after 703.186123ms: waiting for machine to come up
	I1004 03:18:56.771284   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:56.771771   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:56.771816   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:56.771717   31016 retry.go:31] will retry after 687.11987ms: waiting for machine to come up
	I1004 03:18:57.460710   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:57.461089   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:57.461132   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:57.461080   31016 retry.go:31] will retry after 992.439827ms: waiting for machine to come up
	I1004 03:18:58.455669   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:58.456094   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:58.456109   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:58.456062   31016 retry.go:31] will retry after 1.176479657s: waiting for machine to come up
	I1004 03:18:59.634390   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:59.634814   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:59.634839   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:59.634775   31016 retry.go:31] will retry after 1.214254179s: waiting for machine to come up
	I1004 03:19:00.850238   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:00.850699   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:00.850731   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:00.850669   31016 retry.go:31] will retry after 1.755607467s: waiting for machine to come up
	I1004 03:19:02.608547   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:02.608946   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:02.608966   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:02.608910   31016 retry.go:31] will retry after 1.912286614s: waiting for machine to come up
	I1004 03:19:04.522463   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:04.522888   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:04.522917   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:04.522826   31016 retry.go:31] will retry after 2.242710645s: waiting for machine to come up
	I1004 03:19:06.766980   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:06.767510   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:06.767541   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:06.767449   31016 retry.go:31] will retry after 3.842874805s: waiting for machine to come up
	I1004 03:19:10.612857   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:10.613334   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:10.613359   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:10.613293   31016 retry.go:31] will retry after 4.05361864s: waiting for machine to come up
	I1004 03:19:14.669514   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.670029   30630 main.go:141] libmachine: (ha-994751-m02) Found IP for machine: 192.168.39.117
	I1004 03:19:14.670051   30630 main.go:141] libmachine: (ha-994751-m02) Reserving static IP address...
	I1004 03:19:14.670068   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has current primary IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.670622   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find host DHCP lease matching {name: "ha-994751-m02", mac: "52:54:00:b0:e7:80", ip: "192.168.39.117"} in network mk-ha-994751
	I1004 03:19:14.745981   30630 main.go:141] libmachine: (ha-994751-m02) Reserved static IP address: 192.168.39.117
	I1004 03:19:14.746008   30630 main.go:141] libmachine: (ha-994751-m02) Waiting for SSH to be available...
	I1004 03:19:14.746017   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Getting to WaitForSSH function...
	I1004 03:19:14.748804   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.749281   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:14.749310   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.749511   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Using SSH client type: external
	I1004 03:19:14.749551   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa (-rw-------)
	I1004 03:19:14.749581   30630 main.go:141] libmachine: (ha-994751-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.117 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 03:19:14.749606   30630 main.go:141] libmachine: (ha-994751-m02) DBG | About to run SSH command:
	I1004 03:19:14.749624   30630 main.go:141] libmachine: (ha-994751-m02) DBG | exit 0
	I1004 03:19:14.876139   30630 main.go:141] libmachine: (ha-994751-m02) DBG | SSH cmd err, output: <nil>: 
	I1004 03:19:14.876447   30630 main.go:141] libmachine: (ha-994751-m02) KVM machine creation complete!
	I1004 03:19:14.876809   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetConfigRaw
	I1004 03:19:14.877356   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:14.877589   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:14.877768   30630 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 03:19:14.877780   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetState
	I1004 03:19:14.879122   30630 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 03:19:14.879138   30630 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 03:19:14.879143   30630 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 03:19:14.879149   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:14.881593   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.881953   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:14.881980   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.882095   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:14.882322   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:14.882470   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:14.882643   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:14.882838   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:14.883073   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:14.883086   30630 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 03:19:14.983285   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:19:14.983306   30630 main.go:141] libmachine: Detecting the provisioner...
	I1004 03:19:14.983312   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:14.986285   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.986741   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:14.986757   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.987055   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:14.987278   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:14.987439   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:14.987656   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:14.987873   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:14.988031   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:14.988042   30630 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 03:19:15.088950   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 03:19:15.089011   30630 main.go:141] libmachine: found compatible host: buildroot
	I1004 03:19:15.089017   30630 main.go:141] libmachine: Provisioning with buildroot...
	I1004 03:19:15.089024   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetMachineName
	I1004 03:19:15.089254   30630 buildroot.go:166] provisioning hostname "ha-994751-m02"
	I1004 03:19:15.089274   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetMachineName
	I1004 03:19:15.089431   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.092470   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.092890   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.092918   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.093111   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.093289   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.093421   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.093532   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.093663   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:15.093819   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:15.093835   30630 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-994751-m02 && echo "ha-994751-m02" | sudo tee /etc/hostname
	I1004 03:19:15.206985   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-994751-m02
	
	I1004 03:19:15.207013   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.210129   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.210417   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.210457   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.210609   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.210806   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.210951   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.211140   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.211322   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:15.211488   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:15.211503   30630 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-994751-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-994751-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-994751-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:19:15.321696   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:19:15.321728   30630 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 03:19:15.321748   30630 buildroot.go:174] setting up certificates
	I1004 03:19:15.321761   30630 provision.go:84] configureAuth start
	I1004 03:19:15.321773   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetMachineName
	I1004 03:19:15.322055   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetIP
	I1004 03:19:15.324655   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.325067   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.325090   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.325226   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.327479   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.327889   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.327929   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.328106   30630 provision.go:143] copyHostCerts
	I1004 03:19:15.328139   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:19:15.328171   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 03:19:15.328185   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:19:15.328272   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 03:19:15.328393   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:19:15.328420   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 03:19:15.328430   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:19:15.328468   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 03:19:15.328620   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:19:15.328652   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 03:19:15.328662   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:19:15.328718   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 03:19:15.328821   30630 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.ha-994751-m02 san=[127.0.0.1 192.168.39.117 ha-994751-m02 localhost minikube]
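Note on the step above: the machine server certificate is generated with the SANs listed in the log (127.0.0.1, 192.168.39.117, ha-994751-m02, localhost, minikube). A minimal standalone Go sketch for printing the SANs of a PEM certificate like this one, assuming the server.pem path from the log; this is an inspection aid, not minikube code:

    // sancheck.go - print the SANs of a PEM-encoded certificate.
    // Sketch only; the path is the machine cert path from the log above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("DNS SANs:", cert.DNSNames)
        fmt.Println("IP SANs: ", cert.IPAddresses)
    }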
	I1004 03:19:15.560527   30630 provision.go:177] copyRemoteCerts
	I1004 03:19:15.560590   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:19:15.560612   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.563747   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.564236   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.564307   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.564520   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.564706   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.564861   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.565036   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:19:15.646851   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:19:15.646919   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1004 03:19:15.672945   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:19:15.673021   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 03:19:15.699880   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:19:15.699960   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:19:15.725929   30630 provision.go:87] duration metric: took 404.139584ms to configureAuth
	I1004 03:19:15.725975   30630 buildroot.go:189] setting minikube options for container-runtime
	I1004 03:19:15.726189   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:19:15.726282   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.729150   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.729586   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.729623   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.729761   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.729951   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.730107   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.730283   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.730477   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:15.730682   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:15.730704   30630 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:19:15.953783   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:19:15.953808   30630 main.go:141] libmachine: Checking connection to Docker...
	I1004 03:19:15.953817   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetURL
	I1004 03:19:15.955088   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Using libvirt version 6000000
	I1004 03:19:15.957213   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.957617   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.957642   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.957827   30630 main.go:141] libmachine: Docker is up and running!
	I1004 03:19:15.957841   30630 main.go:141] libmachine: Reticulating splines...
	I1004 03:19:15.957847   30630 client.go:171] duration metric: took 22.937783647s to LocalClient.Create
	I1004 03:19:15.957867   30630 start.go:167] duration metric: took 22.937832099s to libmachine.API.Create "ha-994751"
	I1004 03:19:15.957875   30630 start.go:293] postStartSetup for "ha-994751-m02" (driver="kvm2")
	I1004 03:19:15.957884   30630 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:19:15.957899   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:15.958102   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:19:15.958124   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.960392   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.960717   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.960745   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.960883   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.961062   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.961225   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.961368   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:19:16.042404   30630 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:19:16.047363   30630 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 03:19:16.047388   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 03:19:16.047468   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 03:19:16.047535   30630 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 03:19:16.047546   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /etc/ssl/certs/168792.pem
	I1004 03:19:16.047622   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:19:16.057062   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:19:16.082885   30630 start.go:296] duration metric: took 124.998047ms for postStartSetup
	I1004 03:19:16.082935   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetConfigRaw
	I1004 03:19:16.083581   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetIP
	I1004 03:19:16.086204   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.086582   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.086605   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.086841   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:19:16.087032   30630 start.go:128] duration metric: took 23.085132614s to createHost
	I1004 03:19:16.087053   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:16.089417   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.089782   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.089807   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.089984   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:16.090129   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:16.090241   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:16.090315   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:16.090436   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:16.090606   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:16.090615   30630 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 03:19:16.192923   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011956.165669680
	
	I1004 03:19:16.192949   30630 fix.go:216] guest clock: 1728011956.165669680
	I1004 03:19:16.192957   30630 fix.go:229] Guest: 2024-10-04 03:19:16.16566968 +0000 UTC Remote: 2024-10-04 03:19:16.08704226 +0000 UTC m=+70.399873263 (delta=78.62742ms)
	I1004 03:19:16.192972   30630 fix.go:200] guest clock delta is within tolerance: 78.62742ms
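The clock check above compares the guest's date +%s.%N output against the host clock at the moment of the call and accepts the 78.6ms delta. A small Go sketch of that kind of comparison, using the guest timestamp from the log; the one-second threshold is an assumed value for illustration, not minikube's actual tolerance:

    // clockdelta.go - compare a guest timestamp against the local clock.
    // Sketch only; the tolerance value is an assumption.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest clock as reported by `date +%s.%N` in the log above.
        guest := time.Unix(1728011956, 165669680)
        host := time.Now()

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 1 * time.Second // assumed threshold
        fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
    }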
	I1004 03:19:16.192978   30630 start.go:83] releasing machines lock for "ha-994751-m02", held for 23.191201934s
	I1004 03:19:16.193000   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:16.193291   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetIP
	I1004 03:19:16.196268   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.196769   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.196799   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.199156   30630 out.go:177] * Found network options:
	I1004 03:19:16.200650   30630 out.go:177]   - NO_PROXY=192.168.39.65
	W1004 03:19:16.201984   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:19:16.202013   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:16.202608   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:16.202783   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:16.202904   30630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:19:16.202945   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	W1004 03:19:16.203033   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:19:16.203114   30630 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:19:16.203136   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:16.205729   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.205978   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.206109   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.206134   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.206286   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:16.206384   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.206425   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.206455   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:16.206610   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:16.206681   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:16.206748   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:19:16.206786   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:16.206947   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:16.207052   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:19:16.451088   30630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 03:19:16.457611   30630 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 03:19:16.457679   30630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:19:16.474500   30630 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 03:19:16.474524   30630 start.go:495] detecting cgroup driver to use...
	I1004 03:19:16.474577   30630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:19:16.491337   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:19:16.505852   30630 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:19:16.505915   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:19:16.519394   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:19:16.533389   30630 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:19:16.647440   30630 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:19:16.796026   30630 docker.go:233] disabling docker service ...
	I1004 03:19:16.796090   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:19:16.810390   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:19:16.824447   30630 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:19:16.967078   30630 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:19:17.099949   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:19:17.114752   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:19:17.134460   30630 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:19:17.134514   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.144920   30630 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:19:17.144984   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.155252   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.165315   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.175583   30630 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:19:17.186303   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.198678   30630 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.217975   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.229419   30630 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:19:17.241337   30630 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 03:19:17.241386   30630 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 03:19:17.254390   30630 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:19:17.264806   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:19:17.402028   30630 ssh_runner.go:195] Run: sudo systemctl restart crio
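The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.10 and to set cgroup_manager to cgroupfs, after which crio is restarted. A standalone Go sketch of the two main substitutions using the regexp package; the sample config contents are invented for illustration and this is not minikube's implementation:

    // crioconf.go - illustrate the pause_image / cgroup_manager rewrites
    // performed by the sed commands in the log. Sketch only.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Invented sample contents standing in for 02-crio.conf.
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "systemd"
    `
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

        conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }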
	I1004 03:19:17.495758   30630 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:19:17.495841   30630 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:19:17.500623   30630 start.go:563] Will wait 60s for crictl version
	I1004 03:19:17.500678   30630 ssh_runner.go:195] Run: which crictl
	I1004 03:19:17.504705   30630 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:19:17.550368   30630 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 03:19:17.550468   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:19:17.578910   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:19:17.612824   30630 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 03:19:17.614302   30630 out.go:177]   - env NO_PROXY=192.168.39.65
	I1004 03:19:17.615583   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetIP
	I1004 03:19:17.618499   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:17.619022   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:17.619049   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:17.619276   30630 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 03:19:17.623687   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:19:17.636797   30630 mustload.go:65] Loading cluster: ha-994751
	I1004 03:19:17.637003   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:19:17.637273   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:19:17.637322   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:19:17.651836   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38711
	I1004 03:19:17.652278   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:19:17.652784   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:19:17.652801   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:19:17.653111   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:19:17.653311   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:19:17.654878   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:19:17.655231   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:19:17.655273   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:19:17.669844   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I1004 03:19:17.670238   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:19:17.670702   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:19:17.670716   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:19:17.671055   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:19:17.671261   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:19:17.671448   30630 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751 for IP: 192.168.39.117
	I1004 03:19:17.671472   30630 certs.go:194] generating shared ca certs ...
	I1004 03:19:17.671486   30630 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:19:17.671619   30630 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 03:19:17.671665   30630 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 03:19:17.671678   30630 certs.go:256] generating profile certs ...
	I1004 03:19:17.671769   30630 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key
	I1004 03:19:17.671816   30630 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.7edcc3fb
	I1004 03:19:17.671836   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.7edcc3fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65 192.168.39.117 192.168.39.254]
	I1004 03:19:17.982961   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.7edcc3fb ...
	I1004 03:19:17.982990   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.7edcc3fb: {Name:mka857c573044186dc7f952f5b2ab8a540e4e52f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:19:17.983170   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.7edcc3fb ...
	I1004 03:19:17.983188   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.7edcc3fb: {Name:mka872bfad80f36ccf6cfb0285b019b3212263dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:19:17.983268   30630 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.7edcc3fb -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt
	I1004 03:19:17.983413   30630 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.7edcc3fb -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key
	I1004 03:19:17.983593   30630 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key
	I1004 03:19:17.983610   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:19:17.983628   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:19:17.983649   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:19:17.983666   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:19:17.983685   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:19:17.983700   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:19:17.983717   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:19:17.983736   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:19:17.983821   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 03:19:17.983865   30630 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 03:19:17.983877   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:19:17.983909   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:19:17.983943   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:19:17.984054   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 03:19:17.984129   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:19:17.984175   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /usr/share/ca-certificates/168792.pem
	I1004 03:19:17.984197   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:19:17.984216   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem -> /usr/share/ca-certificates/16879.pem
	I1004 03:19:17.984276   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:19:17.987517   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:19:17.987891   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:19:17.987919   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:19:17.988138   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:19:17.988361   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:19:17.988505   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:19:17.988670   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:19:18.060182   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1004 03:19:18.065324   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1004 03:19:18.078017   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1004 03:19:18.082669   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1004 03:19:18.094668   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1004 03:19:18.099036   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1004 03:19:18.110596   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1004 03:19:18.115397   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1004 03:19:18.126291   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1004 03:19:18.131864   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1004 03:19:18.143496   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1004 03:19:18.147678   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1004 03:19:18.158714   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:19:18.185425   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 03:19:18.212989   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:19:18.238721   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 03:19:18.265688   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1004 03:19:18.292564   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 03:19:18.318046   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:19:18.343621   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:19:18.367533   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 03:19:18.391460   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:19:18.414533   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 03:19:18.437881   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1004 03:19:18.454162   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1004 03:19:18.470435   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1004 03:19:18.487697   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1004 03:19:18.504422   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1004 03:19:18.521609   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1004 03:19:18.538712   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1004 03:19:18.555759   30630 ssh_runner.go:195] Run: openssl version
	I1004 03:19:18.561485   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 03:19:18.572838   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 03:19:18.578085   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:19:18.578150   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 03:19:18.584699   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:19:18.596515   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:19:18.608107   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:19:18.613090   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:19:18.613151   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:19:18.619060   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:19:18.630222   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 03:19:18.642211   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 03:19:18.646675   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:19:18.646733   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 03:19:18.652690   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 03:19:18.663892   30630 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:19:18.668101   30630 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 03:19:18.668177   30630 kubeadm.go:934] updating node {m02 192.168.39.117 8443 v1.31.1 crio true true} ...
	I1004 03:19:18.668262   30630 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-994751-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
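The kubelet drop-in above is rendered per node, so only --hostname-override and --node-ip differ between ha-994751 and ha-994751-m02. A small Go sketch that renders an equivalent drop-in with text/template; the template text is paraphrased from the log, not minikube's actual template:

    // kubeletunit.go - render a kubelet drop-in like the one logged above.
    // Sketch only, using text/template.
    package main

    import (
        "log"
        "os"
        "text/template"
    )

    type node struct {
        Name, IP, Version string
    }

    const unit = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        if err := t.Execute(os.Stdout, node{Name: "ha-994751-m02", IP: "192.168.39.117", Version: "v1.31.1"}); err != nil {
            log.Fatal(err)
        }
    }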
	I1004 03:19:18.668287   30630 kube-vip.go:115] generating kube-vip config ...
	I1004 03:19:18.668368   30630 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1004 03:19:18.686599   30630 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1004 03:19:18.686662   30630 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
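The generated manifest runs kube-vip as a static pod on each control-plane node; with cp_enable and vip_leaderelection set, the elected leader answers on the VIP 192.168.39.254:8443. A trivial Go reachability probe for that VIP, which can help distinguish a lost VIP from a stopped apiserver in the HA failures listed earlier in this report (sketch only; the address is taken from the manifest above):

    // vipprobe.go - check whether the kube-vip control-plane VIP answers on 8443.
    // Sketch only.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
        if err != nil {
            fmt.Println("VIP not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("VIP reachable")
    }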
	I1004 03:19:18.686715   30630 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:19:18.697844   30630 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1004 03:19:18.697908   30630 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1004 03:19:18.708942   30630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1004 03:19:18.708972   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1004 03:19:18.708991   30630 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1004 03:19:18.709028   30630 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1004 03:19:18.709031   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1004 03:19:18.713612   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1004 03:19:18.713636   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1004 03:19:19.809158   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:19:19.826203   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1004 03:19:19.826314   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1004 03:19:19.830837   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1004 03:19:19.830871   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1004 03:19:19.978327   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1004 03:19:19.978413   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1004 03:19:19.988543   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1004 03:19:19.988589   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
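The block above shows the kubectl/kubelet/kubeadm binaries being fetched from dl.k8s.io with a checksum reference and copied into /var/lib/minikube/binaries/v1.31.1 on the node. A hand-run equivalent of that download-and-verify step, using the same URLs as the log (illustrative sketch; minikube performs this internally):

    # Download kubeadm v1.31.1 and verify it against the published sha256.
    # The .sha256 file contains only the digest, so pair it with the filename
    # before handing it to sha256sum.
    curl -fLO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm
    echo "$(curl -fL https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256)  kubeadm" | sha256sum --check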
	I1004 03:19:20.364768   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1004 03:19:20.374518   30630 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1004 03:19:20.391501   30630 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:19:20.408160   30630 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1004 03:19:20.424511   30630 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:19:20.428280   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:19:20.439917   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:19:20.559800   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
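Before kubelet is started, the /etc/hosts edit above pins control-plane.minikube.internal to the VIP so the joining node reaches the API server through 192.168.39.254 rather than through a single member. To verify the mapping by hand (illustrative; node name as listed by "minikube -p ha-994751 node list"):

    # On the joining node, the alias should resolve to the VIP written above.
    minikube -p ha-994751 ssh -n ha-994751-m02 "grep control-plane.minikube.internal /etc/hosts"
    # expected: 192.168.39.254  control-plane.minikube.internal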
	I1004 03:19:20.576330   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:19:20.576654   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:19:20.576692   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:19:20.592425   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39375
	I1004 03:19:20.593014   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:19:20.593564   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:19:20.593590   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:19:20.593896   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:19:20.594067   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:19:20.594173   30630 start.go:317] joinCluster: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:19:20.594288   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1004 03:19:20.594307   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:19:20.597288   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:19:20.597706   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:19:20.597738   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:19:20.597851   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:19:20.598146   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:19:20.598359   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:19:20.598601   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:19:20.751261   30630 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:19:20.751313   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tfpvu2.gfmxns87jp8m6lea --discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-994751-m02 --control-plane --apiserver-advertise-address=192.168.39.117 --apiserver-bind-port=8443"
	I1004 03:19:42.477327   30630 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tfpvu2.gfmxns87jp8m6lea --discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-994751-m02 --control-plane --apiserver-advertise-address=192.168.39.117 --apiserver-bind-port=8443": (21.725989536s)
	I1004 03:19:42.477374   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1004 03:19:43.011388   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-994751-m02 minikube.k8s.io/updated_at=2024_10_04T03_19_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-994751 minikube.k8s.io/primary=false
	I1004 03:19:43.128289   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-994751-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1004 03:19:43.240778   30630 start.go:319] duration metric: took 22.646600164s to joinCluster
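The kubeadm join above ran with --control-plane against the VIP endpoint, so m02 comes up as a second API server and etcd member rather than a plain worker; the label and taint commands that follow mark it as a non-primary control-plane node that still schedules workloads. An illustrative way to confirm the result from the host (profile and node names taken from this run):

    # The new node should be listed with the control-plane role and carry the
    # minikube labels applied above.
    kubectl --context ha-994751 get nodes -o wide
    kubectl --context ha-994751 get node ha-994751-m02 --show-labels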
	I1004 03:19:43.240848   30630 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:19:43.241147   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:19:43.242449   30630 out.go:177] * Verifying Kubernetes components...
	I1004 03:19:43.243651   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:19:43.505854   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:19:43.526989   30630 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:19:43.527348   30630 kapi.go:59] client config for ha-994751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key", CAFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1004 03:19:43.527435   30630 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.65:8443
	I1004 03:19:43.527706   30630 node_ready.go:35] waiting up to 6m0s for node "ha-994751-m02" to be "Ready" ...
	I1004 03:19:43.527836   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:43.527848   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:43.527859   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:43.527864   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:43.538086   30630 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1004 03:19:44.028570   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:44.028592   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:44.028599   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:44.028604   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:44.034683   30630 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:19:44.528680   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:44.528707   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:44.528719   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:44.528727   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:44.532210   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:45.028095   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:45.028116   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:45.028124   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:45.028128   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:45.031650   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:45.528659   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:45.528681   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:45.528689   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:45.528693   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:45.532032   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:45.532726   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:46.028184   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:46.028208   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:46.028220   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:46.028224   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:46.031876   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:46.528850   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:46.528870   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:46.528878   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:46.528883   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:46.532535   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:47.028593   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:47.028614   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:47.028622   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:47.028625   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:47.032488   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:47.528380   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:47.528406   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:47.528417   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:47.528423   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:47.532834   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:19:47.533292   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:48.028846   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:48.028866   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:48.028876   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:48.028879   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:48.033387   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:19:48.527941   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:48.527965   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:48.527976   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:48.527982   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:48.531255   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:49.027941   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:49.027974   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:49.027982   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:49.027985   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:49.032078   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:19:49.527942   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:49.527977   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:49.527988   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:49.527993   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:49.531336   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:50.027938   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:50.027962   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:50.027970   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:50.027975   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:50.031574   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:50.032261   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:50.528731   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:50.528756   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:50.528762   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:50.528766   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:50.533072   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:19:51.028280   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:51.028305   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:51.028315   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:51.028318   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:51.031958   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:51.527942   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:51.527963   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:51.527971   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:51.527975   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:51.531671   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:52.028715   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:52.028739   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:52.028747   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:52.028752   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:52.032273   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:52.032782   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:52.528521   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:52.528543   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:52.528553   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:52.528556   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:52.532328   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:53.028497   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:53.028519   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:53.028533   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:53.028536   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:53.031845   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:53.527963   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:53.527986   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:53.527995   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:53.527999   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:53.531468   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:54.028502   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:54.028524   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:54.028533   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:54.028537   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:54.032380   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:54.032974   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:54.528253   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:54.528276   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:54.528286   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:54.528293   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:54.531649   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:55.028786   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:55.028804   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:55.028812   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:55.028817   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:55.032371   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:55.527931   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:55.527953   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:55.527961   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:55.527965   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:55.531477   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:56.028492   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:56.028512   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:56.028519   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:56.028524   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:56.031319   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:19:56.527963   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:56.527981   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:56.527990   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:56.527993   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:56.531347   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:56.531854   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:57.027943   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:57.027962   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:57.027970   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:57.027979   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:57.031176   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:57.527972   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:57.527995   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:57.528006   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:57.528011   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:57.531355   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:58.028084   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:58.028103   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:58.028111   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:58.028115   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:58.034080   30630 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 03:19:58.527939   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:58.527959   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:58.527967   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:58.527972   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:58.530892   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:19:59.027908   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:59.027929   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:59.027938   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:59.027943   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:59.031093   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:59.031750   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:59.528117   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:59.528140   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:59.528148   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:59.528152   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:59.531338   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.027934   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:00.027956   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.027964   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.027968   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.031243   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.527969   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:00.527990   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.527998   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.528002   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.535322   30630 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:20:00.536101   30630 node_ready.go:49] node "ha-994751-m02" has status "Ready":"True"
	I1004 03:20:00.536141   30630 node_ready.go:38] duration metric: took 17.008396711s for node "ha-994751-m02" to be "Ready" ...
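The repeated GETs above are minikube polling the node object (in-process, via client-go) until its Ready condition turns True, which here took about 17 seconds. A roughly equivalent manual wait (illustrative, not what the test runs):

    kubectl --context ha-994751 wait --for=condition=Ready node/ha-994751-m02 --timeout=6m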
	I1004 03:20:00.536154   30630 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:20:00.536255   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:00.536269   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.536281   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.536287   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.550231   30630 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1004 03:20:00.558943   30630 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.559041   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-l6zst
	I1004 03:20:00.559052   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.559063   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.559071   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.562462   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.563534   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.563551   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.563558   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.563562   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.566458   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.567373   30630 pod_ready.go:93] pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.567390   30630 pod_ready.go:82] duration metric: took 8.418573ms for pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.567399   30630 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.567443   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zgdck
	I1004 03:20:00.567450   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.567457   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.567461   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.571010   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.572015   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.572028   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.572035   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.572040   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.574144   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.574637   30630 pod_ready.go:93] pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.574653   30630 pod_ready.go:82] duration metric: took 7.248385ms for pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.574660   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.574701   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751
	I1004 03:20:00.574708   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.574714   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.574718   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.577426   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.578237   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.578256   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.578262   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.578268   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.581297   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.582104   30630 pod_ready.go:93] pod "etcd-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.582124   30630 pod_ready.go:82] duration metric: took 7.457921ms for pod "etcd-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.582136   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.582194   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751-m02
	I1004 03:20:00.582206   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.582213   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.582218   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.584954   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.586074   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:00.586089   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.586096   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.586098   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.588315   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.588797   30630 pod_ready.go:93] pod "etcd-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.588819   30630 pod_ready.go:82] duration metric: took 6.675728ms for pod "etcd-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.588836   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.728447   30630 request.go:632] Waited for 139.544334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751
	I1004 03:20:00.728509   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751
	I1004 03:20:00.728514   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.728522   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.728527   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.732242   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.928492   30630 request.go:632] Waited for 195.478493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.928550   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.928556   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.928563   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.928567   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.932014   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.932660   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.932680   30630 pod_ready.go:82] duration metric: took 343.837498ms for pod "kube-apiserver-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.932690   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.128708   30630 request.go:632] Waited for 195.949159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m02
	I1004 03:20:01.128769   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m02
	I1004 03:20:01.128778   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.128786   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.128790   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.131924   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:01.328936   30630 request.go:632] Waited for 196.247417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:01.328982   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:01.328986   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.328993   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.328999   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.332116   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:01.332718   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:01.332735   30630 pod_ready.go:82] duration metric: took 400.039408ms for pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.332744   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.528985   30630 request.go:632] Waited for 196.178172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751
	I1004 03:20:01.529051   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751
	I1004 03:20:01.529057   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.529064   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.529068   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.532813   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:01.728751   30630 request.go:632] Waited for 195.374296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:01.728822   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:01.728828   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.728835   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.728838   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.732685   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:01.733267   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:01.733284   30630 pod_ready.go:82] duration metric: took 400.533757ms for pod "kube-controller-manager-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.733292   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.928444   30630 request.go:632] Waited for 195.093384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m02
	I1004 03:20:01.928511   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m02
	I1004 03:20:01.928517   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.928523   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.928531   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.931659   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.128724   30630 request.go:632] Waited for 196.347214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:02.128778   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:02.128783   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.128789   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.128794   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.132222   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.132803   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:02.132822   30630 pod_ready.go:82] duration metric: took 399.524177ms for pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.132832   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f44b9" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.328210   30630 request.go:632] Waited for 195.309099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f44b9
	I1004 03:20:02.328274   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f44b9
	I1004 03:20:02.328281   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.328288   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.328293   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.331313   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.528409   30630 request.go:632] Waited for 196.390078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:02.528468   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:02.528474   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.528481   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.528486   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.531912   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.532422   30630 pod_ready.go:93] pod "kube-proxy-f44b9" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:02.532446   30630 pod_ready.go:82] duration metric: took 399.600972ms for pod "kube-proxy-f44b9" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.532455   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ph6cf" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.728449   30630 request.go:632] Waited for 195.932314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ph6cf
	I1004 03:20:02.728525   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ph6cf
	I1004 03:20:02.728531   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.728539   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.728547   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.732138   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.928159   30630 request.go:632] Waited for 195.316789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:02.928222   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:02.928227   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.928234   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.928238   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.931607   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.932124   30630 pod_ready.go:93] pod "kube-proxy-ph6cf" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:02.932148   30630 pod_ready.go:82] duration metric: took 399.687611ms for pod "kube-proxy-ph6cf" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.932157   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:03.128514   30630 request.go:632] Waited for 196.295312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751
	I1004 03:20:03.128566   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751
	I1004 03:20:03.128571   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.128579   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.128585   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.131954   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:03.328958   30630 request.go:632] Waited for 196.406685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:03.329017   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:03.329023   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.329031   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.329039   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.332357   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:03.332971   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:03.332988   30630 pod_ready.go:82] duration metric: took 400.824355ms for pod "kube-scheduler-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:03.332997   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:03.528105   30630 request.go:632] Waited for 195.029512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m02
	I1004 03:20:03.528157   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m02
	I1004 03:20:03.528162   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.528169   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.528173   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.531733   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:03.727947   30630 request.go:632] Waited for 195.304105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:03.728022   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:03.728029   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.728038   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.728046   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.731222   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:03.731799   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:03.731823   30630 pod_ready.go:82] duration metric: took 398.818433ms for pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:03.731836   30630 pod_ready.go:39] duration metric: took 3.195663558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:20:03.731854   30630 api_server.go:52] waiting for apiserver process to appear ...
	I1004 03:20:03.731914   30630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:20:03.748156   30630 api_server.go:72] duration metric: took 20.507274316s to wait for apiserver process to appear ...
	I1004 03:20:03.748186   30630 api_server.go:88] waiting for apiserver healthz status ...
	I1004 03:20:03.748208   30630 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I1004 03:20:03.752562   30630 api_server.go:279] https://192.168.39.65:8443/healthz returned 200:
	ok
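The healthz probe above goes straight to the API server on the primary node. The same check can be reproduced with curl, using the cluster CA path that appears in the client config earlier in this log (illustrative; minikube issues the request in-process):

    curl --cacert /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt https://192.168.39.65:8443/healthz
    # expected body: ok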
	I1004 03:20:03.752615   30630 round_trippers.go:463] GET https://192.168.39.65:8443/version
	I1004 03:20:03.752620   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.752627   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.752633   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.753368   30630 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1004 03:20:03.753569   30630 api_server.go:141] control plane version: v1.31.1
	I1004 03:20:03.753592   30630 api_server.go:131] duration metric: took 5.397003ms to wait for apiserver health ...
	I1004 03:20:03.753601   30630 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 03:20:03.928947   30630 request.go:632] Waited for 175.282043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:03.929032   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:03.929040   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.929049   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.929055   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.934063   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:20:03.938318   30630 system_pods.go:59] 17 kube-system pods found
	I1004 03:20:03.938350   30630 system_pods.go:61] "coredns-7c65d6cfc9-l6zst" [554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab] Running
	I1004 03:20:03.938358   30630 system_pods.go:61] "coredns-7c65d6cfc9-zgdck" [dcd6ed49-8491-4eb0-9863-b498c76ec3c5] Running
	I1004 03:20:03.938363   30630 system_pods.go:61] "etcd-ha-994751" [ad26bfe8-b3b3-44fb-8f83-fd4f62a92ea6] Running
	I1004 03:20:03.938369   30630 system_pods.go:61] "etcd-ha-994751-m02" [540bc27b-d1ee-48e7-99a3-5daf036ec06f] Running
	I1004 03:20:03.938373   30630 system_pods.go:61] "kindnet-2mhh2" [442d5ad9-dc9c-4a07-90b3-549591f9d2f1] Running
	I1004 03:20:03.938378   30630 system_pods.go:61] "kindnet-rmcvt" [08ef4494-9229-4cd3-b22e-10709b88e14c] Running
	I1004 03:20:03.938383   30630 system_pods.go:61] "kube-apiserver-ha-994751" [68f6b078-f7ae-4c2d-b424-372647c7d203] Running
	I1004 03:20:03.938387   30630 system_pods.go:61] "kube-apiserver-ha-994751-m02" [f4773005-30fa-46bc-b372-bbd0c7bfc1f1] Running
	I1004 03:20:03.938392   30630 system_pods.go:61] "kube-controller-manager-ha-994751" [7a09373f-d6d5-4fa2-bc68-0f2ba151761f] Running
	I1004 03:20:03.938397   30630 system_pods.go:61] "kube-controller-manager-ha-994751-m02" [31278e89-fb33-4713-b0e4-3b589944caa9] Running
	I1004 03:20:03.938402   30630 system_pods.go:61] "kube-proxy-f44b9" [e3e1a917-0150-4608-b5f3-b590d330d2ce] Running
	I1004 03:20:03.938408   30630 system_pods.go:61] "kube-proxy-ph6cf" [0eef5964-876b-4cee-ad4c-c93ab034d3f9] Running
	I1004 03:20:03.938416   30630 system_pods.go:61] "kube-scheduler-ha-994751" [527e5bff-7234-4ab0-9952-fe9ec87ab01a] Running
	I1004 03:20:03.938422   30630 system_pods.go:61] "kube-scheduler-ha-994751-m02" [9c89432d-e607-4e86-8d68-12ee0c2f3170] Running
	I1004 03:20:03.938430   30630 system_pods.go:61] "kube-vip-ha-994751" [7955e17e-6c22-49b3-aa6a-9d37b9bc7942] Running
	I1004 03:20:03.938435   30630 system_pods.go:61] "kube-vip-ha-994751-m02" [944f1844-b8c2-410e-981d-3705beb22638] Running
	I1004 03:20:03.938440   30630 system_pods.go:61] "storage-provisioner" [cc60903f-91b9-4e59-92ab-9f16c09d38d2] Running
	I1004 03:20:03.938450   30630 system_pods.go:74] duration metric: took 184.842668ms to wait for pod list to return data ...
	I1004 03:20:03.938469   30630 default_sa.go:34] waiting for default service account to be created ...
	I1004 03:20:04.128894   30630 request.go:632] Waited for 190.327691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:20:04.128944   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:20:04.128949   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:04.128956   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:04.128960   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:04.132905   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:04.133105   30630 default_sa.go:45] found service account: "default"
	I1004 03:20:04.133122   30630 default_sa.go:55] duration metric: took 194.645917ms for default service account to be created ...
	I1004 03:20:04.133132   30630 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 03:20:04.328598   30630 request.go:632] Waited for 195.393579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:04.328702   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:04.328730   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:04.328744   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:04.328753   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:04.333188   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:20:04.337805   30630 system_pods.go:86] 17 kube-system pods found
	I1004 03:20:04.337832   30630 system_pods.go:89] "coredns-7c65d6cfc9-l6zst" [554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab] Running
	I1004 03:20:04.337838   30630 system_pods.go:89] "coredns-7c65d6cfc9-zgdck" [dcd6ed49-8491-4eb0-9863-b498c76ec3c5] Running
	I1004 03:20:04.337842   30630 system_pods.go:89] "etcd-ha-994751" [ad26bfe8-b3b3-44fb-8f83-fd4f62a92ea6] Running
	I1004 03:20:04.337848   30630 system_pods.go:89] "etcd-ha-994751-m02" [540bc27b-d1ee-48e7-99a3-5daf036ec06f] Running
	I1004 03:20:04.337851   30630 system_pods.go:89] "kindnet-2mhh2" [442d5ad9-dc9c-4a07-90b3-549591f9d2f1] Running
	I1004 03:20:04.337855   30630 system_pods.go:89] "kindnet-rmcvt" [08ef4494-9229-4cd3-b22e-10709b88e14c] Running
	I1004 03:20:04.337859   30630 system_pods.go:89] "kube-apiserver-ha-994751" [68f6b078-f7ae-4c2d-b424-372647c7d203] Running
	I1004 03:20:04.337863   30630 system_pods.go:89] "kube-apiserver-ha-994751-m02" [f4773005-30fa-46bc-b372-bbd0c7bfc1f1] Running
	I1004 03:20:04.337867   30630 system_pods.go:89] "kube-controller-manager-ha-994751" [7a09373f-d6d5-4fa2-bc68-0f2ba151761f] Running
	I1004 03:20:04.337874   30630 system_pods.go:89] "kube-controller-manager-ha-994751-m02" [31278e89-fb33-4713-b0e4-3b589944caa9] Running
	I1004 03:20:04.337878   30630 system_pods.go:89] "kube-proxy-f44b9" [e3e1a917-0150-4608-b5f3-b590d330d2ce] Running
	I1004 03:20:04.337885   30630 system_pods.go:89] "kube-proxy-ph6cf" [0eef5964-876b-4cee-ad4c-c93ab034d3f9] Running
	I1004 03:20:04.337889   30630 system_pods.go:89] "kube-scheduler-ha-994751" [527e5bff-7234-4ab0-9952-fe9ec87ab01a] Running
	I1004 03:20:04.337901   30630 system_pods.go:89] "kube-scheduler-ha-994751-m02" [9c89432d-e607-4e86-8d68-12ee0c2f3170] Running
	I1004 03:20:04.337904   30630 system_pods.go:89] "kube-vip-ha-994751" [7955e17e-6c22-49b3-aa6a-9d37b9bc7942] Running
	I1004 03:20:04.337907   30630 system_pods.go:89] "kube-vip-ha-994751-m02" [944f1844-b8c2-410e-981d-3705beb22638] Running
	I1004 03:20:04.337912   30630 system_pods.go:89] "storage-provisioner" [cc60903f-91b9-4e59-92ab-9f16c09d38d2] Running
	I1004 03:20:04.337921   30630 system_pods.go:126] duration metric: took 204.78361ms to wait for k8s-apps to be running ...
	I1004 03:20:04.337930   30630 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 03:20:04.337975   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:20:04.352705   30630 system_svc.go:56] duration metric: took 14.766178ms WaitForService to wait for kubelet
	I1004 03:20:04.352728   30630 kubeadm.go:582] duration metric: took 21.111850874s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:20:04.352744   30630 node_conditions.go:102] verifying NodePressure condition ...
	I1004 03:20:04.528049   30630 request.go:632] Waited for 175.240806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes
	I1004 03:20:04.528140   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes
	I1004 03:20:04.528148   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:04.528158   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:04.528166   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:04.532040   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:04.532645   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:20:04.532668   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:20:04.532682   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:20:04.532689   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:20:04.532696   30630 node_conditions.go:105] duration metric: took 179.947049ms to run NodePressure ...
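	The node_conditions lines above come from listing /api/v1/nodes and reading each node's reported capacity. A minimal client-go sketch of that same check follows; it assumes a standard kubeconfig rather than minikube's internal client wiring, and printNodeCapacity is an illustrative name, not a minikube function.

	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	// printNodeCapacity lists all nodes and prints the ephemeral-storage and CPU
	// capacity reported in their status, mirroring the node_conditions log lines.
	func printNodeCapacity(clientset kubernetes.Interface) error {
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		}
		return nil
	}

	func main() {
		// Assumes the default kubeconfig location; minikube wires its own client instead.
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := printNodeCapacity(clientset); err != nil {
			panic(err)
		}
	}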
	I1004 03:20:04.532711   30630 start.go:241] waiting for startup goroutines ...
	I1004 03:20:04.532748   30630 start.go:255] writing updated cluster config ...
	I1004 03:20:04.534798   30630 out.go:201] 
	I1004 03:20:04.536250   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:20:04.536346   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:20:04.537713   30630 out.go:177] * Starting "ha-994751-m03" control-plane node in "ha-994751" cluster
	I1004 03:20:04.538772   30630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:20:04.538791   30630 cache.go:56] Caching tarball of preloaded images
	I1004 03:20:04.538881   30630 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 03:20:04.538892   30630 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:20:04.538970   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:20:04.539124   30630 start.go:360] acquireMachinesLock for ha-994751-m03: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 03:20:04.539179   30630 start.go:364] duration metric: took 32.772µs to acquireMachinesLock for "ha-994751-m03"
	I1004 03:20:04.539202   30630 start.go:93] Provisioning new machine with config: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:20:04.539327   30630 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1004 03:20:04.540776   30630 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 03:20:04.540857   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:20:04.540889   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:20:04.555427   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46337
	I1004 03:20:04.555831   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:20:04.556364   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:20:04.556394   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:20:04.556738   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:20:04.556921   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetMachineName
	I1004 03:20:04.557038   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:04.557175   30630 start.go:159] libmachine.API.Create for "ha-994751" (driver="kvm2")
	I1004 03:20:04.557204   30630 client.go:168] LocalClient.Create starting
	I1004 03:20:04.557233   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 03:20:04.557271   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:20:04.557291   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:20:04.557375   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 03:20:04.557421   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:20:04.557449   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:20:04.557481   30630 main.go:141] libmachine: Running pre-create checks...
	I1004 03:20:04.557495   30630 main.go:141] libmachine: (ha-994751-m03) Calling .PreCreateCheck
	I1004 03:20:04.557705   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetConfigRaw
	I1004 03:20:04.558081   30630 main.go:141] libmachine: Creating machine...
	I1004 03:20:04.558096   30630 main.go:141] libmachine: (ha-994751-m03) Calling .Create
	I1004 03:20:04.558257   30630 main.go:141] libmachine: (ha-994751-m03) Creating KVM machine...
	I1004 03:20:04.559668   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found existing default KVM network
	I1004 03:20:04.559869   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found existing private KVM network mk-ha-994751
	I1004 03:20:04.560039   30630 main.go:141] libmachine: (ha-994751-m03) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03 ...
	I1004 03:20:04.560065   30630 main.go:141] libmachine: (ha-994751-m03) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 03:20:04.560110   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:04.560016   31400 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:20:04.560192   30630 main.go:141] libmachine: (ha-994751-m03) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 03:20:04.808276   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:04.808145   31400 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa...
	I1004 03:20:05.005812   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:05.005703   31400 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/ha-994751-m03.rawdisk...
	I1004 03:20:05.005838   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Writing magic tar header
	I1004 03:20:05.005848   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Writing SSH key tar header
	I1004 03:20:05.005856   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:05.005807   31400 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03 ...
	I1004 03:20:05.005932   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03
	I1004 03:20:05.005971   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 03:20:05.006001   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03 (perms=drwx------)
	I1004 03:20:05.006011   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:20:05.006021   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 03:20:05.006034   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 03:20:05.006047   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 03:20:05.006063   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 03:20:05.006075   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 03:20:05.006086   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 03:20:05.006100   30630 main.go:141] libmachine: (ha-994751-m03) Creating domain...
	I1004 03:20:05.006109   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 03:20:05.006122   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins
	I1004 03:20:05.006135   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home
	I1004 03:20:05.006147   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Skipping /home - not owner
	I1004 03:20:05.007092   30630 main.go:141] libmachine: (ha-994751-m03) define libvirt domain using xml: 
	I1004 03:20:05.007116   30630 main.go:141] libmachine: (ha-994751-m03) <domain type='kvm'>
	I1004 03:20:05.007126   30630 main.go:141] libmachine: (ha-994751-m03)   <name>ha-994751-m03</name>
	I1004 03:20:05.007139   30630 main.go:141] libmachine: (ha-994751-m03)   <memory unit='MiB'>2200</memory>
	I1004 03:20:05.007151   30630 main.go:141] libmachine: (ha-994751-m03)   <vcpu>2</vcpu>
	I1004 03:20:05.007158   30630 main.go:141] libmachine: (ha-994751-m03)   <features>
	I1004 03:20:05.007166   30630 main.go:141] libmachine: (ha-994751-m03)     <acpi/>
	I1004 03:20:05.007173   30630 main.go:141] libmachine: (ha-994751-m03)     <apic/>
	I1004 03:20:05.007177   30630 main.go:141] libmachine: (ha-994751-m03)     <pae/>
	I1004 03:20:05.007183   30630 main.go:141] libmachine: (ha-994751-m03)     
	I1004 03:20:05.007189   30630 main.go:141] libmachine: (ha-994751-m03)   </features>
	I1004 03:20:05.007198   30630 main.go:141] libmachine: (ha-994751-m03)   <cpu mode='host-passthrough'>
	I1004 03:20:05.007205   30630 main.go:141] libmachine: (ha-994751-m03)   
	I1004 03:20:05.007209   30630 main.go:141] libmachine: (ha-994751-m03)   </cpu>
	I1004 03:20:05.007231   30630 main.go:141] libmachine: (ha-994751-m03)   <os>
	I1004 03:20:05.007247   30630 main.go:141] libmachine: (ha-994751-m03)     <type>hvm</type>
	I1004 03:20:05.007256   30630 main.go:141] libmachine: (ha-994751-m03)     <boot dev='cdrom'/>
	I1004 03:20:05.007270   30630 main.go:141] libmachine: (ha-994751-m03)     <boot dev='hd'/>
	I1004 03:20:05.007282   30630 main.go:141] libmachine: (ha-994751-m03)     <bootmenu enable='no'/>
	I1004 03:20:05.007301   30630 main.go:141] libmachine: (ha-994751-m03)   </os>
	I1004 03:20:05.007312   30630 main.go:141] libmachine: (ha-994751-m03)   <devices>
	I1004 03:20:05.007323   30630 main.go:141] libmachine: (ha-994751-m03)     <disk type='file' device='cdrom'>
	I1004 03:20:05.007339   30630 main.go:141] libmachine: (ha-994751-m03)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/boot2docker.iso'/>
	I1004 03:20:05.007353   30630 main.go:141] libmachine: (ha-994751-m03)       <target dev='hdc' bus='scsi'/>
	I1004 03:20:05.007365   30630 main.go:141] libmachine: (ha-994751-m03)       <readonly/>
	I1004 03:20:05.007373   30630 main.go:141] libmachine: (ha-994751-m03)     </disk>
	I1004 03:20:05.007385   30630 main.go:141] libmachine: (ha-994751-m03)     <disk type='file' device='disk'>
	I1004 03:20:05.007397   30630 main.go:141] libmachine: (ha-994751-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 03:20:05.007412   30630 main.go:141] libmachine: (ha-994751-m03)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/ha-994751-m03.rawdisk'/>
	I1004 03:20:05.007427   30630 main.go:141] libmachine: (ha-994751-m03)       <target dev='hda' bus='virtio'/>
	I1004 03:20:05.007439   30630 main.go:141] libmachine: (ha-994751-m03)     </disk>
	I1004 03:20:05.007448   30630 main.go:141] libmachine: (ha-994751-m03)     <interface type='network'>
	I1004 03:20:05.007465   30630 main.go:141] libmachine: (ha-994751-m03)       <source network='mk-ha-994751'/>
	I1004 03:20:05.007474   30630 main.go:141] libmachine: (ha-994751-m03)       <model type='virtio'/>
	I1004 03:20:05.007484   30630 main.go:141] libmachine: (ha-994751-m03)     </interface>
	I1004 03:20:05.007498   30630 main.go:141] libmachine: (ha-994751-m03)     <interface type='network'>
	I1004 03:20:05.007510   30630 main.go:141] libmachine: (ha-994751-m03)       <source network='default'/>
	I1004 03:20:05.007520   30630 main.go:141] libmachine: (ha-994751-m03)       <model type='virtio'/>
	I1004 03:20:05.007530   30630 main.go:141] libmachine: (ha-994751-m03)     </interface>
	I1004 03:20:05.007540   30630 main.go:141] libmachine: (ha-994751-m03)     <serial type='pty'>
	I1004 03:20:05.007550   30630 main.go:141] libmachine: (ha-994751-m03)       <target port='0'/>
	I1004 03:20:05.007559   30630 main.go:141] libmachine: (ha-994751-m03)     </serial>
	I1004 03:20:05.007576   30630 main.go:141] libmachine: (ha-994751-m03)     <console type='pty'>
	I1004 03:20:05.007591   30630 main.go:141] libmachine: (ha-994751-m03)       <target type='serial' port='0'/>
	I1004 03:20:05.007600   30630 main.go:141] libmachine: (ha-994751-m03)     </console>
	I1004 03:20:05.007608   30630 main.go:141] libmachine: (ha-994751-m03)     <rng model='virtio'>
	I1004 03:20:05.007614   30630 main.go:141] libmachine: (ha-994751-m03)       <backend model='random'>/dev/random</backend>
	I1004 03:20:05.007620   30630 main.go:141] libmachine: (ha-994751-m03)     </rng>
	I1004 03:20:05.007628   30630 main.go:141] libmachine: (ha-994751-m03)     
	I1004 03:20:05.007636   30630 main.go:141] libmachine: (ha-994751-m03)     
	I1004 03:20:05.007652   30630 main.go:141] libmachine: (ha-994751-m03)   </devices>
	I1004 03:20:05.007666   30630 main.go:141] libmachine: (ha-994751-m03) </domain>
	I1004 03:20:05.007678   30630 main.go:141] libmachine: (ha-994751-m03) 
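	The <domain type='kvm'> XML printed above is what gets handed to libvirt to define and boot the new VM. A minimal sketch with the libvirt Go bindings follows; the kvm2 machine driver does this internally, so the package path, file name, and error handling here are illustrative only.

	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt" // older trees use github.com/libvirt/libvirt-go
	)

	func main() {
		// Connect to the same system URI the log shows (KVMQemuURI:qemu:///system).
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		// domainXML would hold the <domain type='kvm'> document printed in the log.
		domainXML, err := os.ReadFile("ha-994751-m03.xml")
		if err != nil {
			log.Fatalf("read xml: %v", err)
		}

		// Define the persistent domain, then start it ("Creating domain..." in the log).
		dom, err := conn.DomainDefineXML(string(domainXML))
		if err != nil {
			log.Fatalf("define: %v", err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			log.Fatalf("start: %v", err)
		}
		log.Println("domain defined and started; now waiting for a DHCP lease")
	}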
	I1004 03:20:05.014475   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:d0:97:18 in network default
	I1004 03:20:05.015005   30630 main.go:141] libmachine: (ha-994751-m03) Ensuring networks are active...
	I1004 03:20:05.015041   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:05.015645   30630 main.go:141] libmachine: (ha-994751-m03) Ensuring network default is active
	I1004 03:20:05.015928   30630 main.go:141] libmachine: (ha-994751-m03) Ensuring network mk-ha-994751 is active
	I1004 03:20:05.016249   30630 main.go:141] libmachine: (ha-994751-m03) Getting domain xml...
	I1004 03:20:05.016929   30630 main.go:141] libmachine: (ha-994751-m03) Creating domain...
	I1004 03:20:06.261440   30630 main.go:141] libmachine: (ha-994751-m03) Waiting to get IP...
	I1004 03:20:06.262071   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:06.262414   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:06.262472   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:06.262421   31400 retry.go:31] will retry after 250.348601ms: waiting for machine to come up
	I1004 03:20:06.515070   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:06.515535   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:06.515565   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:06.515468   31400 retry.go:31] will retry after 243.422578ms: waiting for machine to come up
	I1004 03:20:06.760919   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:06.761413   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:06.761440   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:06.761366   31400 retry.go:31] will retry after 323.138496ms: waiting for machine to come up
	I1004 03:20:07.085754   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:07.086220   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:07.086254   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:07.086174   31400 retry.go:31] will retry after 589.608599ms: waiting for machine to come up
	I1004 03:20:07.676793   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:07.677255   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:07.677277   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:07.677220   31400 retry.go:31] will retry after 686.955192ms: waiting for machine to come up
	I1004 03:20:08.365977   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:08.366366   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:08.366390   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:08.366322   31400 retry.go:31] will retry after 861.927469ms: waiting for machine to come up
	I1004 03:20:09.229974   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:09.230402   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:09.230431   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:09.230354   31400 retry.go:31] will retry after 766.03024ms: waiting for machine to come up
	I1004 03:20:09.997533   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:09.997938   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:09.997963   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:09.997907   31400 retry.go:31] will retry after 980.127757ms: waiting for machine to come up
	I1004 03:20:10.979306   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:10.979718   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:10.979743   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:10.979684   31400 retry.go:31] will retry after 1.544904084s: waiting for machine to come up
	I1004 03:20:12.525854   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:12.526225   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:12.526249   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:12.526177   31400 retry.go:31] will retry after 1.432028005s: waiting for machine to come up
	I1004 03:20:13.960907   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:13.961388   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:13.961415   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:13.961367   31400 retry.go:31] will retry after 1.927604807s: waiting for machine to come up
	I1004 03:20:15.890697   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:15.891148   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:15.891175   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:15.891091   31400 retry.go:31] will retry after 3.506356031s: waiting for machine to come up
	I1004 03:20:19.400810   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:19.401322   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:19.401349   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:19.401272   31400 retry.go:31] will retry after 3.367410839s: waiting for machine to come up
	I1004 03:20:22.769867   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:22.770373   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:22.770407   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:22.770302   31400 retry.go:31] will retry after 5.266869096s: waiting for machine to come up
	I1004 03:20:28.041532   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:28.041995   30630 main.go:141] libmachine: (ha-994751-m03) Found IP for machine: 192.168.39.53
	I1004 03:20:28.042014   30630 main.go:141] libmachine: (ha-994751-m03) Reserving static IP address...
	I1004 03:20:28.042026   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has current primary IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:28.042375   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find host DHCP lease matching {name: "ha-994751-m03", mac: "52:54:00:49:76:ea", ip: "192.168.39.53"} in network mk-ha-994751
	I1004 03:20:28.115076   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Getting to WaitForSSH function...
	I1004 03:20:28.115105   30630 main.go:141] libmachine: (ha-994751-m03) Reserved static IP address: 192.168.39.53
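	The "will retry after ..." lines above are a polling loop over the network's DHCP leases, with the delay growing on each miss until the domain reports 192.168.39.53. A generic sketch of that wait pattern is below; waitForIP and its backoff values are illustrative, not minikube's retry package.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls lookup() with a growing delay until it returns a non-empty
	// address or the overall deadline passes, mirroring the retry lines in the log.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			ip, err := lookup()
			if err == nil && ip != "" {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			// Grow the delay, but cap it so the lease is still checked regularly.
			if delay < 5*time.Second {
				delay = delay * 3 / 2
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		// lookup would normally query the DHCP leases of the mk-ha-994751 network
		// for the domain's MAC address; here it is stubbed out for illustration.
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.39.53", nil
		}, 2*time.Minute)
		fmt.Println(ip, err)
	}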
	I1004 03:20:28.115145   30630 main.go:141] libmachine: (ha-994751-m03) Waiting for SSH to be available...
	I1004 03:20:28.117390   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:28.117662   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751
	I1004 03:20:28.117678   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find defined IP address of network mk-ha-994751 interface with MAC address 52:54:00:49:76:ea
	I1004 03:20:28.117841   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using SSH client type: external
	I1004 03:20:28.117866   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa (-rw-------)
	I1004 03:20:28.117909   30630 main.go:141] libmachine: (ha-994751-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 03:20:28.117924   30630 main.go:141] libmachine: (ha-994751-m03) DBG | About to run SSH command:
	I1004 03:20:28.117940   30630 main.go:141] libmachine: (ha-994751-m03) DBG | exit 0
	I1004 03:20:28.121632   30630 main.go:141] libmachine: (ha-994751-m03) DBG | SSH cmd err, output: exit status 255: 
	I1004 03:20:28.121657   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1004 03:20:28.121668   30630 main.go:141] libmachine: (ha-994751-m03) DBG | command : exit 0
	I1004 03:20:28.121677   30630 main.go:141] libmachine: (ha-994751-m03) DBG | err     : exit status 255
	I1004 03:20:28.121690   30630 main.go:141] libmachine: (ha-994751-m03) DBG | output  : 
	I1004 03:20:31.123157   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Getting to WaitForSSH function...
	I1004 03:20:31.125515   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.125954   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.125981   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.126121   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using SSH client type: external
	I1004 03:20:31.126148   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa (-rw-------)
	I1004 03:20:31.126175   30630 main.go:141] libmachine: (ha-994751-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 03:20:31.126186   30630 main.go:141] libmachine: (ha-994751-m03) DBG | About to run SSH command:
	I1004 03:20:31.126199   30630 main.go:141] libmachine: (ha-994751-m03) DBG | exit 0
	I1004 03:20:31.255788   30630 main.go:141] libmachine: (ha-994751-m03) DBG | SSH cmd err, output: <nil>: 
	I1004 03:20:31.256048   30630 main.go:141] libmachine: (ha-994751-m03) KVM machine creation complete!
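	WaitForSSH keeps running "exit 0" over SSH until the command succeeds; the first attempt above fails with exit status 255 because sshd inside the guest is not up yet. A small probe with the same shape, using golang.org/x/crypto/ssh, is sketched below; sshReady is an illustrative name, and libmachine itself shells out to the external ssh binary as the log shows.

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// sshReady dials host:22 as the given user and runs "exit 0", returning nil
	// once the command succeeds, like libmachine's WaitForSSH loop.
	func sshReady(host, user, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", host+":22", cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		return session.Run("exit 0")
	}

	func main() {
		for {
			err := sshReady("192.168.39.53", "docker",
				"/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa")
			if err == nil {
				fmt.Println("SSH is available")
				return
			}
			fmt.Println("SSH not ready yet:", err)
			time.Sleep(3 * time.Second)
		}
	}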
	I1004 03:20:31.256416   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetConfigRaw
	I1004 03:20:31.257001   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:31.257196   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:31.257537   30630 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 03:20:31.257552   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetState
	I1004 03:20:31.258954   30630 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 03:20:31.258966   30630 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 03:20:31.258972   30630 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 03:20:31.258978   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.261065   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.261407   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.261432   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.261523   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.261696   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.261827   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.261939   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.262104   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.262338   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.262354   30630 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 03:20:31.371392   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:20:31.371421   30630 main.go:141] libmachine: Detecting the provisioner...
	I1004 03:20:31.371431   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.374360   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.374677   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.374703   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.374874   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.375093   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.375299   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.375463   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.375637   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.375858   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.375873   30630 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 03:20:31.489043   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 03:20:31.489093   30630 main.go:141] libmachine: found compatible host: buildroot
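	The provisioner is detected by running cat /etc/os-release on the guest and matching the ID field, which is buildroot here. A minimal parse of that output is sketched below; osReleaseID is an illustrative helper, and libmachine's real detector handles more distributions and quoting rules.

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// osReleaseID extracts the ID= value from /etc/os-release content, which is
	// what selects the Buildroot provisioner for this host.
	func osReleaseID(content string) string {
		scanner := bufio.NewScanner(strings.NewReader(content))
		for scanner.Scan() {
			line := strings.TrimSpace(scanner.Text())
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			}
		}
		return ""
	}

	func main() {
		// The exact output captured in the log above.
		out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		fmt.Println(osReleaseID(out)) // buildroot
	}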
	I1004 03:20:31.489100   30630 main.go:141] libmachine: Provisioning with buildroot...
	I1004 03:20:31.489107   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetMachineName
	I1004 03:20:31.489333   30630 buildroot.go:166] provisioning hostname "ha-994751-m03"
	I1004 03:20:31.489357   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetMachineName
	I1004 03:20:31.489534   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.492101   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.492553   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.492573   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.492727   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.492907   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.493039   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.493147   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.493277   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.493442   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.493453   30630 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-994751-m03 && echo "ha-994751-m03" | sudo tee /etc/hostname
	I1004 03:20:31.626029   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-994751-m03
	
	I1004 03:20:31.626058   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.628598   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.629032   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.629055   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.629247   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.629454   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.629599   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.629757   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.629901   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.630075   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.630108   30630 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-994751-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-994751-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-994751-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:20:31.754855   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:20:31.754886   30630 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 03:20:31.754923   30630 buildroot.go:174] setting up certificates
	I1004 03:20:31.754934   30630 provision.go:84] configureAuth start
	I1004 03:20:31.754946   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetMachineName
	I1004 03:20:31.755194   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetIP
	I1004 03:20:31.757747   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.758065   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.758087   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.758193   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.760414   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.760746   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.760771   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.760844   30630 provision.go:143] copyHostCerts
	I1004 03:20:31.760875   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:20:31.760907   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 03:20:31.760915   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:20:31.760984   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 03:20:31.761064   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:20:31.761082   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 03:20:31.761088   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:20:31.761114   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 03:20:31.761166   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:20:31.761182   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 03:20:31.761188   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:20:31.761214   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 03:20:31.761271   30630 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.ha-994751-m03 san=[127.0.0.1 192.168.39.53 ha-994751-m03 localhost minikube]
	I1004 03:20:31.828214   30630 provision.go:177] copyRemoteCerts
	I1004 03:20:31.828263   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:20:31.828283   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.830707   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.831047   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.831078   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.831192   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.831375   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.831522   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.831636   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:20:31.917792   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:20:31.917859   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:20:31.943534   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:20:31.943606   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1004 03:20:31.968990   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:20:31.969060   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 03:20:31.992331   30630 provision.go:87] duration metric: took 237.384107ms to configureAuth
	I1004 03:20:31.992362   30630 buildroot.go:189] setting minikube options for container-runtime
	I1004 03:20:31.992622   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:20:31.992738   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.995570   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.995946   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.995975   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.996126   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.996306   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.996434   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.996569   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.996677   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.996863   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.996880   30630 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:20:32.229026   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:20:32.229061   30630 main.go:141] libmachine: Checking connection to Docker...
	I1004 03:20:32.229071   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetURL
	I1004 03:20:32.230237   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using libvirt version 6000000
	I1004 03:20:32.232533   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.232839   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.232870   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.233012   30630 main.go:141] libmachine: Docker is up and running!
	I1004 03:20:32.233029   30630 main.go:141] libmachine: Reticulating splines...
	I1004 03:20:32.233037   30630 client.go:171] duration metric: took 27.675822366s to LocalClient.Create
	I1004 03:20:32.233061   30630 start.go:167] duration metric: took 27.675885367s to libmachine.API.Create "ha-994751"
	I1004 03:20:32.233071   30630 start.go:293] postStartSetup for "ha-994751-m03" (driver="kvm2")
	I1004 03:20:32.233080   30630 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:20:32.233096   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.233315   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:20:32.233341   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:32.235889   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.236270   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.236297   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.236452   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:32.236641   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.236787   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:32.236936   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:20:32.321827   30630 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:20:32.326129   30630 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 03:20:32.326152   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 03:20:32.326232   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 03:20:32.326328   30630 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 03:20:32.326339   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /etc/ssl/certs/168792.pem
	I1004 03:20:32.326421   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:20:32.336376   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:20:32.359653   30630 start.go:296] duration metric: took 126.571809ms for postStartSetup
	I1004 03:20:32.359721   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetConfigRaw
	I1004 03:20:32.360268   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetIP
	I1004 03:20:32.362856   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.363243   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.363268   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.363469   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:20:32.363663   30630 start.go:128] duration metric: took 27.824325438s to createHost
	I1004 03:20:32.363686   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:32.365882   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.366210   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.366226   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.366350   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:32.366523   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.366674   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.366824   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:32.366985   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:32.367180   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:32.367194   30630 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 03:20:32.480703   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728012032.461011085
	
	I1004 03:20:32.480725   30630 fix.go:216] guest clock: 1728012032.461011085
	I1004 03:20:32.480735   30630 fix.go:229] Guest: 2024-10-04 03:20:32.461011085 +0000 UTC Remote: 2024-10-04 03:20:32.363675 +0000 UTC m=+146.676506004 (delta=97.336085ms)
	I1004 03:20:32.480753   30630 fix.go:200] guest clock delta is within tolerance: 97.336085ms
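
The fix.go lines above compare the guest's `date +%s.%N` output with the host's wall clock and accept the drift if it stays inside a tolerance. Below is a minimal Go sketch of that comparison; the guest timestamp is the SSH output logged above, while the one-second tolerance is only an assumption for illustration (the log does not show the real threshold).

// Minimal sketch of the guest-clock tolerance check logged above.
// Assumption: the 1s tolerance is illustrative, not the value minikube uses.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1728012032.461011085" // raw output of `date +%s.%N` on the guest
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest) // host clock minus guest clock
	tolerance := 1 * time.Second
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
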
	I1004 03:20:32.480760   30630 start.go:83] releasing machines lock for "ha-994751-m03", held for 27.941569364s
	I1004 03:20:32.480780   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.480989   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetIP
	I1004 03:20:32.483796   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.484159   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.484191   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.486391   30630 out.go:177] * Found network options:
	I1004 03:20:32.487654   30630 out.go:177]   - NO_PROXY=192.168.39.65,192.168.39.117
	W1004 03:20:32.488913   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	W1004 03:20:32.488946   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:20:32.488964   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.489521   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.489776   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.489869   30630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:20:32.489906   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	W1004 03:20:32.489985   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	W1004 03:20:32.490009   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:20:32.490068   30630 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:20:32.490090   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:32.492646   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.492900   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.493125   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.493149   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.493245   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.493267   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.493334   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:32.493500   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.493556   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:32.493707   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:32.493736   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.493920   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:20:32.493987   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:32.494105   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:20:32.742057   30630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 03:20:32.749338   30630 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 03:20:32.749392   30630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:20:32.765055   30630 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 03:20:32.765079   30630 start.go:495] detecting cgroup driver to use...
	I1004 03:20:32.765139   30630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:20:32.780546   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:20:32.797729   30630 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:20:32.797789   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:20:32.810917   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:20:32.823880   30630 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:20:32.941749   30630 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:20:33.094803   30630 docker.go:233] disabling docker service ...
	I1004 03:20:33.094875   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:20:33.108945   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:20:33.122238   30630 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:20:33.259499   30630 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:20:33.382162   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:20:33.399956   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:20:33.419077   30630 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:20:33.419147   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.431123   30630 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:20:33.431176   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.442393   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.454523   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.465583   30630 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:20:33.477059   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.487953   30630 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.505077   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
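
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with a pause image, a cgroupfs cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl. The fragment below is a reconstruction of how those keys end up, not a dump of the actual file, and the section headers are the standard CRI-O ones rather than anything shown in the log.

# reconstructed from the sed commands above, not read from the node
[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
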
	I1004 03:20:33.515522   30630 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:20:33.526537   30630 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 03:20:33.526592   30630 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 03:20:33.540307   30630 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
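
The sysctl failure above is expected when the br_netfilter module is not yet loaded: minikube probes the bridge-nf-call-iptables key, loads the module when the probe fails, and then enables IPv4 forwarding. A rough Go equivalent of that sequence is sketched below; the paths are the ones from the log and the error handling is deliberately minimal.

// Sketch of the bridge-netfilter preflight shown in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// sudo sysctl net.bridge.bridge-nf-call-iptables fails if the key is absent.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// sudo modprobe br_netfilter
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
		}
	}
	// sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
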
	I1004 03:20:33.550485   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:20:33.660459   30630 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 03:20:33.759769   30630 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:20:33.759862   30630 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:20:33.764677   30630 start.go:563] Will wait 60s for crictl version
	I1004 03:20:33.764728   30630 ssh_runner.go:195] Run: which crictl
	I1004 03:20:33.768748   30630 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:20:33.815756   30630 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 03:20:33.815849   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:20:33.843604   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:20:33.875395   30630 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 03:20:33.876902   30630 out.go:177]   - env NO_PROXY=192.168.39.65
	I1004 03:20:33.878202   30630 out.go:177]   - env NO_PROXY=192.168.39.65,192.168.39.117
	I1004 03:20:33.879354   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetIP
	I1004 03:20:33.881763   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:33.882075   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:33.882116   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:33.882282   30630 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 03:20:33.887016   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:20:33.900617   30630 mustload.go:65] Loading cluster: ha-994751
	I1004 03:20:33.900859   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:20:33.901101   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:20:33.901139   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:20:33.916080   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40599
	I1004 03:20:33.916545   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:20:33.917019   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:20:33.917038   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:20:33.917311   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:20:33.917490   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:20:33.918758   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:20:33.919091   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:20:33.919127   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:20:33.934895   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43781
	I1004 03:20:33.935369   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:20:33.935847   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:20:33.935870   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:20:33.936191   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:20:33.936373   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:20:33.936519   30630 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751 for IP: 192.168.39.53
	I1004 03:20:33.936531   30630 certs.go:194] generating shared ca certs ...
	I1004 03:20:33.936550   30630 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:20:33.936692   30630 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 03:20:33.936742   30630 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 03:20:33.936754   30630 certs.go:256] generating profile certs ...
	I1004 03:20:33.936848   30630 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key
	I1004 03:20:33.936877   30630 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.c7b5eb21
	I1004 03:20:33.936895   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.c7b5eb21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65 192.168.39.117 192.168.39.53 192.168.39.254]
	I1004 03:20:34.019919   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.c7b5eb21 ...
	I1004 03:20:34.019948   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.c7b5eb21: {Name:mk35ee00bf994088c6b50391189f3e324fc0101b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:20:34.020103   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.c7b5eb21 ...
	I1004 03:20:34.020114   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.c7b5eb21: {Name:mk408ba3330d2e90d98d309cc86d9e5e670f9570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:20:34.020180   30630 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.c7b5eb21 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt
	I1004 03:20:34.020296   30630 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.c7b5eb21 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key
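
The apiserver certificate regenerated above has to list every address clients may use: the in-cluster service IP (10.96.0.1), localhost, all three control-plane node IPs, and the kube-vip VIP 192.168.39.254. The sketch below shows what such an IP-SAN list looks like with crypto/x509; it self-signs for brevity, whereas the real certificate is issued with the shared minikubeCA key the log says it is reusing.

// Sketch only: a serving certificate carrying the IP SANs listed above.
// In minikube this cert is signed by the existing minikubeCA, not self-signed.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the SAN list from the crypto.go line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.65"), net.ParseIP("192.168.39.117"),
			net.ParseIP("192.168.39.53"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
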
	I1004 03:20:34.020411   30630 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key
	I1004 03:20:34.020425   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:20:34.020438   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:20:34.020452   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:20:34.020465   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:20:34.020477   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:20:34.020489   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:20:34.020501   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:20:34.035820   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:20:34.035890   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 03:20:34.035926   30630 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 03:20:34.035946   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:20:34.035969   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:20:34.035990   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:20:34.036010   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 03:20:34.036045   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:20:34.036074   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem -> /usr/share/ca-certificates/16879.pem
	I1004 03:20:34.036087   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /usr/share/ca-certificates/168792.pem
	I1004 03:20:34.036100   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:20:34.036130   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:20:34.039080   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:20:34.039469   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:20:34.039485   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:20:34.039662   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:20:34.039893   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:20:34.040036   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:20:34.040151   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:20:34.112207   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1004 03:20:34.117935   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1004 03:20:34.131114   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1004 03:20:34.136170   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1004 03:20:34.149066   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1004 03:20:34.153717   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1004 03:20:34.167750   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1004 03:20:34.172288   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1004 03:20:34.184761   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1004 03:20:34.189707   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1004 03:20:34.201792   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1004 03:20:34.206305   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1004 03:20:34.218091   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:20:34.243235   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 03:20:34.267642   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:20:34.291741   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 03:20:34.317056   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1004 03:20:34.340832   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 03:20:34.364951   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:20:34.392565   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:20:34.419461   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 03:20:34.444597   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 03:20:34.470026   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:20:34.495443   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1004 03:20:34.513085   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1004 03:20:34.530602   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1004 03:20:34.548064   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1004 03:20:34.565179   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1004 03:20:34.582199   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1004 03:20:34.599469   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1004 03:20:34.617008   30630 ssh_runner.go:195] Run: openssl version
	I1004 03:20:34.623238   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 03:20:34.635851   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 03:20:34.641242   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:20:34.641300   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 03:20:34.647354   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:20:34.660625   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:20:34.673563   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:20:34.678872   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:20:34.678918   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:20:34.685228   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:20:34.696965   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 03:20:34.708173   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 03:20:34.712666   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:20:34.712728   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 03:20:34.718347   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 03:20:34.729423   30630 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:20:34.733599   30630 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 03:20:34.733645   30630 kubeadm.go:934] updating node {m03 192.168.39.53 8443 v1.31.1 crio true true} ...
	I1004 03:20:34.733734   30630 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-994751-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 03:20:34.733759   30630 kube-vip.go:115] generating kube-vip config ...
	I1004 03:20:34.733788   30630 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1004 03:20:34.753104   30630 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1004 03:20:34.753160   30630 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1004 03:20:34.753207   30630 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:20:34.764605   30630 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1004 03:20:34.764653   30630 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1004 03:20:34.776026   30630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1004 03:20:34.776058   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1004 03:20:34.776073   30630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1004 03:20:34.776077   30630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1004 03:20:34.776094   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1004 03:20:34.776111   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1004 03:20:34.776123   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:20:34.776154   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1004 03:20:34.784508   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1004 03:20:34.784532   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1004 03:20:34.784546   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1004 03:20:34.784554   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1004 03:20:34.816412   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1004 03:20:34.816537   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1004 03:20:34.932259   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1004 03:20:34.932304   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1004 03:20:35.665849   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1004 03:20:35.676114   30630 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1004 03:20:35.694028   30630 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:20:35.718864   30630 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1004 03:20:35.736291   30630 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:20:35.740907   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:20:35.753115   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:20:35.870874   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:20:35.888175   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:20:35.888614   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:20:35.888675   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:20:35.903712   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33367
	I1004 03:20:35.904202   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:20:35.904676   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:20:35.904700   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:20:35.904994   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:20:35.905194   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:20:35.905357   30630 start.go:317] joinCluster: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:20:35.905474   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1004 03:20:35.905495   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:20:35.908275   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:20:35.908713   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:20:35.908739   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:20:35.908875   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:20:35.909047   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:20:35.909173   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:20:35.909303   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:20:36.083592   30630 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:20:36.083645   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e5abq7.epvk18yjfmjj0i7x --discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-994751-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443"
	I1004 03:20:57.688048   30630 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e5abq7.epvk18yjfmjj0i7x --discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-994751-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443": (21.604380186s)
	I1004 03:20:57.688081   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1004 03:20:58.272843   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-994751-m03 minikube.k8s.io/updated_at=2024_10_04T03_20_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-994751 minikube.k8s.io/primary=false
	I1004 03:20:58.405355   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-994751-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1004 03:20:58.529681   30630 start.go:319] duration metric: took 22.624319783s to joinCluster
	I1004 03:20:58.529762   30630 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:20:58.530014   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:20:58.531345   30630 out.go:177] * Verifying Kubernetes components...
	I1004 03:20:58.532710   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:20:58.800802   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:20:58.844203   30630 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:20:58.844571   30630 kapi.go:59] client config for ha-994751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key", CAFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1004 03:20:58.844645   30630 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.65:8443
	I1004 03:20:58.844892   30630 node_ready.go:35] waiting up to 6m0s for node "ha-994751-m03" to be "Ready" ...
	I1004 03:20:58.844972   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:20:58.844982   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:58.844998   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:58.845007   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:58.848088   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:59.345094   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:20:59.345120   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:59.345130   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:59.345135   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:59.353141   30630 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:20:59.845733   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:20:59.845805   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:59.845823   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:59.845832   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:59.850171   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:00.345129   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:00.345150   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:00.345159   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:00.345163   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:00.348609   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:00.845173   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:00.845196   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:00.845205   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:00.845210   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:00.850207   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:00.851383   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
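
The repeated GETs against /api/v1/nodes/ha-994751-m03 are node_ready.go polling roughly every 500ms until the node reports the Ready condition. A condensed client-go sketch of the same wait loop follows; the kubeconfig path is the one loaded in the log above, and the 6m timeout matches the "Will wait 6m0s for node" line.

// Sketch of waiting for a node's Ready condition with client-go,
// mirroring the polling loop visible in the round_trippers output above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19546-9647/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // "Will wait 6m0s for node"
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-994751-m03", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls on roughly this interval
	}
	fmt.Println("timed out waiting for node to become Ready")
}
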
	I1004 03:21:01.345051   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:01.345072   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:01.345079   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:01.345083   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:01.349207   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:01.845336   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:01.845357   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:01.845364   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:01.845369   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:01.848367   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:02.345495   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:02.345521   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:02.345529   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:02.345534   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:02.349838   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:02.845704   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:02.845732   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:02.845745   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:02.845752   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:02.849074   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:03.345450   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:03.345472   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:03.345480   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:03.345484   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:03.349082   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:03.349671   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:03.846035   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:03.846061   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:03.846072   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:03.846079   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:03.850455   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:04.345156   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:04.345183   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:04.345191   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:04.345196   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:04.349346   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:04.845676   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:04.845695   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:04.845702   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:04.845707   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:04.849977   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:05.345993   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:05.346019   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:05.346028   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:05.346032   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:05.350487   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:05.352077   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:05.845454   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:05.845473   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:05.845486   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:05.845493   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:05.848902   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:06.345394   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:06.345416   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:06.345424   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:06.345428   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:06.348963   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:06.846045   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:06.846066   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:06.846077   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:06.846084   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:06.849291   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:07.345224   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:07.345249   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:07.345258   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:07.345261   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:07.348950   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:07.845773   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:07.845797   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:07.845807   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:07.845812   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:07.853790   30630 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:21:07.854460   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:08.345396   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:08.345417   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:08.345425   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:08.345430   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:08.348967   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:08.845960   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:08.845987   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:08.845998   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:08.846004   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:08.849592   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:09.345163   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:09.345187   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:09.345195   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:09.345199   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:09.348412   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:09.845700   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:09.845720   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:09.845727   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:09.845732   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:09.848850   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:10.346002   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:10.346024   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:10.346036   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:10.346041   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:10.349778   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:10.350421   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:10.845273   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:10.845342   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:10.845357   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:10.845364   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:10.849249   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:11.345450   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:11.345474   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:11.345485   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:11.345490   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:11.348615   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:11.845521   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:11.845544   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:11.845552   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:11.845557   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:11.851020   30630 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 03:21:12.345427   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:12.345455   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:12.345466   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:12.345473   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:12.348894   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:12.845773   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:12.845807   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:12.845815   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:12.845821   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:12.849096   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:12.849859   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:13.345600   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:13.345625   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:13.345635   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:13.345641   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:13.348986   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:13.845088   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:13.845115   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:13.845122   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:13.845126   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:13.848813   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:14.345772   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:14.345796   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:14.345804   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:14.345809   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:14.349538   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:14.845967   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:14.845999   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:14.846010   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:14.846015   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:14.849646   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:14.850106   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:15.345479   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:15.345501   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:15.345509   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:15.345514   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:15.348633   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:15.845308   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:15.845329   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:15.845337   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:15.845342   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:15.848613   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:16.345615   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:16.345635   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.345697   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.345709   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.349189   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:16.845211   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:16.845234   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.845243   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.845247   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.848314   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:16.848965   30630 node_ready.go:49] node "ha-994751-m03" has status "Ready":"True"
	I1004 03:21:16.848983   30630 node_ready.go:38] duration metric: took 18.004075427s for node "ha-994751-m03" to be "Ready" ...
	I1004 03:21:16.848993   30630 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:21:16.849057   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:16.849066   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.849073   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.849077   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.855878   30630 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:21:16.863339   30630 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.863413   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-l6zst
	I1004 03:21:16.863421   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.863428   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.863432   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.866627   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:16.867225   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:16.867246   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.867254   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.867257   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.869745   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.870174   30630 pod_ready.go:93] pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:16.870189   30630 pod_ready.go:82] duration metric: took 6.828744ms for pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.870197   30630 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.870257   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zgdck
	I1004 03:21:16.870266   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.870272   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.870277   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.872665   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.873280   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:16.873293   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.873300   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.873304   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.875767   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.876277   30630 pod_ready.go:93] pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:16.876299   30630 pod_ready.go:82] duration metric: took 6.094854ms for pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.876312   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.876381   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751
	I1004 03:21:16.876394   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.876405   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.876415   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.878641   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.879297   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:16.879315   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.879323   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.879330   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.881505   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.881911   30630 pod_ready.go:93] pod "etcd-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:16.881925   30630 pod_ready.go:82] duration metric: took 5.606429ms for pod "etcd-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.881933   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.881973   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751-m02
	I1004 03:21:16.881980   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.881986   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.881991   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.884217   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.884882   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:16.884896   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.884903   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.884907   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.887109   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.887576   30630 pod_ready.go:93] pod "etcd-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:16.887592   30630 pod_ready.go:82] duration metric: took 5.65336ms for pod "etcd-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.887600   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.046004   30630 request.go:632] Waited for 158.354973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751-m03
	I1004 03:21:17.046081   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751-m03
	I1004 03:21:17.046092   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.046103   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.046113   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.049599   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:17.245822   30630 request.go:632] Waited for 195.387196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:17.245913   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:17.245920   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.245929   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.245937   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.249684   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:17.250373   30630 pod_ready.go:93] pod "etcd-ha-994751-m03" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:17.250391   30630 pod_ready.go:82] duration metric: took 362.785163ms for pod "etcd-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.250406   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.445530   30630 request.go:632] Waited for 195.055856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751
	I1004 03:21:17.445608   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751
	I1004 03:21:17.445614   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.445621   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.445627   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.449209   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:17.645177   30630 request.go:632] Waited for 195.266127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:17.645277   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:17.645290   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.645300   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.645307   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.648339   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:17.648978   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:17.648997   30630 pod_ready.go:82] duration metric: took 398.583614ms for pod "kube-apiserver-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.649010   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.845996   30630 request.go:632] Waited for 196.900731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m02
	I1004 03:21:17.846073   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m02
	I1004 03:21:17.846082   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.846092   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.846097   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.849729   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.045771   30630 request.go:632] Waited for 195.364695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:18.045824   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:18.045829   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.045837   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.045843   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.049741   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.050457   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:18.050479   30630 pod_ready.go:82] duration metric: took 401.458645ms for pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.050491   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.245708   30630 request.go:632] Waited for 195.123371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m03
	I1004 03:21:18.245779   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m03
	I1004 03:21:18.245788   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.245798   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.245805   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.248803   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:18.445802   30630 request.go:632] Waited for 196.359557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:18.445880   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:18.445891   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.445903   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.445912   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.449153   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.449859   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751-m03" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:18.449875   30630 pod_ready.go:82] duration metric: took 399.376745ms for pod "kube-apiserver-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.449884   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.646109   30630 request.go:632] Waited for 196.148252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751
	I1004 03:21:18.646174   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751
	I1004 03:21:18.646181   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.646190   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.646196   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.649910   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.845959   30630 request.go:632] Waited for 195.355273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:18.846052   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:18.846066   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.846077   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.846084   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.849452   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.849983   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:18.849999   30630 pod_ready.go:82] duration metric: took 400.109282ms for pod "kube-controller-manager-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.850007   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.045892   30630 request.go:632] Waited for 195.812536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m02
	I1004 03:21:19.045949   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m02
	I1004 03:21:19.045954   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.045962   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.045965   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.049481   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:19.245703   30630 request.go:632] Waited for 195.37604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:19.245795   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:19.245807   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.245816   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.245821   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.249221   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:19.249770   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:19.249786   30630 pod_ready.go:82] duration metric: took 399.773598ms for pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.249797   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.445959   30630 request.go:632] Waited for 196.084722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m03
	I1004 03:21:19.446017   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m03
	I1004 03:21:19.446023   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.446030   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.446034   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.449595   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:19.646055   30630 request.go:632] Waited for 195.452676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:19.646103   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:19.646110   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.646121   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.646126   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.649308   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:19.649980   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751-m03" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:19.650000   30630 pod_ready.go:82] duration metric: took 400.193489ms for pod "kube-controller-manager-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.650010   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9q6q2" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.846046   30630 request.go:632] Waited for 195.979747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9q6q2
	I1004 03:21:19.846103   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9q6q2
	I1004 03:21:19.846109   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.846116   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.846121   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.850032   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.045346   30630 request.go:632] Waited for 194.290233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:20.045412   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:20.045419   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.045429   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.045435   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.049187   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.049735   30630 pod_ready.go:93] pod "kube-proxy-9q6q2" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:20.049758   30630 pod_ready.go:82] duration metric: took 399.740576ms for pod "kube-proxy-9q6q2" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.049773   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f44b9" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.245829   30630 request.go:632] Waited for 195.994651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f44b9
	I1004 03:21:20.245916   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f44b9
	I1004 03:21:20.245926   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.245933   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.245938   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.248898   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:20.445831   30630 request.go:632] Waited for 196.355752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:20.445904   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:20.445910   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.445921   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.445925   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.449843   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.450548   30630 pod_ready.go:93] pod "kube-proxy-f44b9" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:20.450575   30630 pod_ready.go:82] duration metric: took 400.789271ms for pod "kube-proxy-f44b9" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.450587   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ph6cf" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.645991   30630 request.go:632] Waited for 195.320241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ph6cf
	I1004 03:21:20.646051   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ph6cf
	I1004 03:21:20.646061   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.646072   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.646084   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.649526   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.845351   30630 request.go:632] Waited for 195.084601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:20.845415   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:20.845423   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.845433   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.845439   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.849107   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.849683   30630 pod_ready.go:93] pod "kube-proxy-ph6cf" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:20.849702   30630 pod_ready.go:82] duration metric: took 399.106228ms for pod "kube-proxy-ph6cf" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.849714   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.046211   30630 request.go:632] Waited for 196.431281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751
	I1004 03:21:21.046274   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751
	I1004 03:21:21.046287   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.046297   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.046303   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.049644   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:21.245652   30630 request.go:632] Waited for 195.357611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:21.245701   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:21.245707   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.245717   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.245729   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.248937   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:21.249459   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:21.249477   30630 pod_ready.go:82] duration metric: took 399.754955ms for pod "kube-scheduler-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.249485   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.445624   30630 request.go:632] Waited for 196.058326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m02
	I1004 03:21:21.445695   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m02
	I1004 03:21:21.445700   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.445708   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.445713   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.449658   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:21.645861   30630 request.go:632] Waited for 195.383024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:21.645947   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:21.645959   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.646444   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.646457   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.649535   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:21.650129   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:21.650145   30630 pod_ready.go:82] duration metric: took 400.653773ms for pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.650155   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.846280   30630 request.go:632] Waited for 196.044885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m03
	I1004 03:21:21.846336   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m03
	I1004 03:21:21.846342   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.846349   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.846354   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.849713   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:22.045755   30630 request.go:632] Waited for 195.414064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:22.045827   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:22.045834   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.045841   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.045847   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.049538   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:22.050359   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751-m03" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:22.050378   30630 pod_ready.go:82] duration metric: took 400.213357ms for pod "kube-scheduler-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:22.050389   30630 pod_ready.go:39] duration metric: took 5.201387664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:21:22.050412   30630 api_server.go:52] waiting for apiserver process to appear ...
	I1004 03:21:22.050477   30630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:21:22.066998   30630 api_server.go:72] duration metric: took 23.53720299s to wait for apiserver process to appear ...
	I1004 03:21:22.067023   30630 api_server.go:88] waiting for apiserver healthz status ...
	I1004 03:21:22.067042   30630 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I1004 03:21:22.074791   30630 api_server.go:279] https://192.168.39.65:8443/healthz returned 200:
	ok
	I1004 03:21:22.074864   30630 round_trippers.go:463] GET https://192.168.39.65:8443/version
	I1004 03:21:22.074872   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.074885   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.074896   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.075865   30630 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1004 03:21:22.075921   30630 api_server.go:141] control plane version: v1.31.1
	I1004 03:21:22.075934   30630 api_server.go:131] duration metric: took 8.905409ms to wait for apiserver health ...
	I1004 03:21:22.075941   30630 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 03:21:22.245389   30630 request.go:632] Waited for 169.386949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:22.245481   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:22.245490   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.245505   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.245516   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.251617   30630 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:21:22.258944   30630 system_pods.go:59] 24 kube-system pods found
	I1004 03:21:22.258969   30630 system_pods.go:61] "coredns-7c65d6cfc9-l6zst" [554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab] Running
	I1004 03:21:22.258974   30630 system_pods.go:61] "coredns-7c65d6cfc9-zgdck" [dcd6ed49-8491-4eb0-9863-b498c76ec3c5] Running
	I1004 03:21:22.258980   30630 system_pods.go:61] "etcd-ha-994751" [ad26bfe8-b3b3-44fb-8f83-fd4f62a92ea6] Running
	I1004 03:21:22.258984   30630 system_pods.go:61] "etcd-ha-994751-m02" [540bc27b-d1ee-48e7-99a3-5daf036ec06f] Running
	I1004 03:21:22.258987   30630 system_pods.go:61] "etcd-ha-994751-m03" [610c4e0c-9af8-441e-9524-ccd6fe6fe390] Running
	I1004 03:21:22.258990   30630 system_pods.go:61] "kindnet-2mhh2" [442d5ad9-dc9c-4a07-90b3-549591f9d2f1] Running
	I1004 03:21:22.258992   30630 system_pods.go:61] "kindnet-clt5p" [a904ebc8-f149-4b9f-9637-a37cb56af836] Running
	I1004 03:21:22.258994   30630 system_pods.go:61] "kindnet-rmcvt" [08ef4494-9229-4cd3-b22e-10709b88e14c] Running
	I1004 03:21:22.258997   30630 system_pods.go:61] "kube-apiserver-ha-994751" [68f6b078-f7ae-4c2d-b424-372647c7d203] Running
	I1004 03:21:22.259012   30630 system_pods.go:61] "kube-apiserver-ha-994751-m02" [f4773005-30fa-46bc-b372-bbd0c7bfc1f1] Running
	I1004 03:21:22.259017   30630 system_pods.go:61] "kube-apiserver-ha-994751-m03" [42150ae1-b298-4974-976f-05e9a2a32154] Running
	I1004 03:21:22.259020   30630 system_pods.go:61] "kube-controller-manager-ha-994751" [7a09373f-d6d5-4fa2-bc68-0f2ba151761f] Running
	I1004 03:21:22.259023   30630 system_pods.go:61] "kube-controller-manager-ha-994751-m02" [31278e89-fb33-4713-b0e4-3b589944caa9] Running
	I1004 03:21:22.259027   30630 system_pods.go:61] "kube-controller-manager-ha-994751-m03" [5897468d-7872-4fed-81bc-bf9b37e42ef4] Running
	I1004 03:21:22.259030   30630 system_pods.go:61] "kube-proxy-9q6q2" [a3b96ca0-fe8c-4492-a05c-5f8ff9cb8d3f] Running
	I1004 03:21:22.259033   30630 system_pods.go:61] "kube-proxy-f44b9" [e3e1a917-0150-4608-b5f3-b590d330d2ce] Running
	I1004 03:21:22.259036   30630 system_pods.go:61] "kube-proxy-ph6cf" [0eef5964-876b-4cee-ad4c-c93ab034d3f9] Running
	I1004 03:21:22.259039   30630 system_pods.go:61] "kube-scheduler-ha-994751" [527e5bff-7234-4ab0-9952-fe9ec87ab01a] Running
	I1004 03:21:22.259042   30630 system_pods.go:61] "kube-scheduler-ha-994751-m02" [9c89432d-e607-4e86-8d68-12ee0c2f3170] Running
	I1004 03:21:22.259046   30630 system_pods.go:61] "kube-scheduler-ha-994751-m03" [f53fda60-a075-4f78-a64b-52e960a4b28b] Running
	I1004 03:21:22.259048   30630 system_pods.go:61] "kube-vip-ha-994751" [7955e17e-6c22-49b3-aa6a-9d37b9bc7942] Running
	I1004 03:21:22.259051   30630 system_pods.go:61] "kube-vip-ha-994751-m02" [944f1844-b8c2-410e-981d-3705beb22638] Running
	I1004 03:21:22.259054   30630 system_pods.go:61] "kube-vip-ha-994751-m03" [9ec22347-f3d6-419e-867a-0de177976203] Running
	I1004 03:21:22.259056   30630 system_pods.go:61] "storage-provisioner" [cc60903f-91b9-4e59-92ab-9f16c09d38d2] Running
	I1004 03:21:22.259062   30630 system_pods.go:74] duration metric: took 183.116626ms to wait for pod list to return data ...
	I1004 03:21:22.259072   30630 default_sa.go:34] waiting for default service account to be created ...
	I1004 03:21:22.445504   30630 request.go:632] Waited for 186.355323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:21:22.445557   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:21:22.445563   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.445570   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.445575   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.449437   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:22.449567   30630 default_sa.go:45] found service account: "default"
	I1004 03:21:22.449589   30630 default_sa.go:55] duration metric: took 190.510625ms for default service account to be created ...
	I1004 03:21:22.449599   30630 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 03:21:22.646023   30630 request.go:632] Waited for 196.345892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:22.646077   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:22.646096   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.646106   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.646115   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.652169   30630 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:21:22.660351   30630 system_pods.go:86] 24 kube-system pods found
	I1004 03:21:22.660376   30630 system_pods.go:89] "coredns-7c65d6cfc9-l6zst" [554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab] Running
	I1004 03:21:22.660386   30630 system_pods.go:89] "coredns-7c65d6cfc9-zgdck" [dcd6ed49-8491-4eb0-9863-b498c76ec3c5] Running
	I1004 03:21:22.660391   30630 system_pods.go:89] "etcd-ha-994751" [ad26bfe8-b3b3-44fb-8f83-fd4f62a92ea6] Running
	I1004 03:21:22.660395   30630 system_pods.go:89] "etcd-ha-994751-m02" [540bc27b-d1ee-48e7-99a3-5daf036ec06f] Running
	I1004 03:21:22.660398   30630 system_pods.go:89] "etcd-ha-994751-m03" [610c4e0c-9af8-441e-9524-ccd6fe6fe390] Running
	I1004 03:21:22.660402   30630 system_pods.go:89] "kindnet-2mhh2" [442d5ad9-dc9c-4a07-90b3-549591f9d2f1] Running
	I1004 03:21:22.660405   30630 system_pods.go:89] "kindnet-clt5p" [a904ebc8-f149-4b9f-9637-a37cb56af836] Running
	I1004 03:21:22.660408   30630 system_pods.go:89] "kindnet-rmcvt" [08ef4494-9229-4cd3-b22e-10709b88e14c] Running
	I1004 03:21:22.660412   30630 system_pods.go:89] "kube-apiserver-ha-994751" [68f6b078-f7ae-4c2d-b424-372647c7d203] Running
	I1004 03:21:22.660416   30630 system_pods.go:89] "kube-apiserver-ha-994751-m02" [f4773005-30fa-46bc-b372-bbd0c7bfc1f1] Running
	I1004 03:21:22.660419   30630 system_pods.go:89] "kube-apiserver-ha-994751-m03" [42150ae1-b298-4974-976f-05e9a2a32154] Running
	I1004 03:21:22.660423   30630 system_pods.go:89] "kube-controller-manager-ha-994751" [7a09373f-d6d5-4fa2-bc68-0f2ba151761f] Running
	I1004 03:21:22.660426   30630 system_pods.go:89] "kube-controller-manager-ha-994751-m02" [31278e89-fb33-4713-b0e4-3b589944caa9] Running
	I1004 03:21:22.660432   30630 system_pods.go:89] "kube-controller-manager-ha-994751-m03" [5897468d-7872-4fed-81bc-bf9b37e42ef4] Running
	I1004 03:21:22.660437   30630 system_pods.go:89] "kube-proxy-9q6q2" [a3b96ca0-fe8c-4492-a05c-5f8ff9cb8d3f] Running
	I1004 03:21:22.660440   30630 system_pods.go:89] "kube-proxy-f44b9" [e3e1a917-0150-4608-b5f3-b590d330d2ce] Running
	I1004 03:21:22.660443   30630 system_pods.go:89] "kube-proxy-ph6cf" [0eef5964-876b-4cee-ad4c-c93ab034d3f9] Running
	I1004 03:21:22.660450   30630 system_pods.go:89] "kube-scheduler-ha-994751" [527e5bff-7234-4ab0-9952-fe9ec87ab01a] Running
	I1004 03:21:22.660453   30630 system_pods.go:89] "kube-scheduler-ha-994751-m02" [9c89432d-e607-4e86-8d68-12ee0c2f3170] Running
	I1004 03:21:22.660456   30630 system_pods.go:89] "kube-scheduler-ha-994751-m03" [f53fda60-a075-4f78-a64b-52e960a4b28b] Running
	I1004 03:21:22.660465   30630 system_pods.go:89] "kube-vip-ha-994751" [7955e17e-6c22-49b3-aa6a-9d37b9bc7942] Running
	I1004 03:21:22.660470   30630 system_pods.go:89] "kube-vip-ha-994751-m02" [944f1844-b8c2-410e-981d-3705beb22638] Running
	I1004 03:21:22.660473   30630 system_pods.go:89] "kube-vip-ha-994751-m03" [9ec22347-f3d6-419e-867a-0de177976203] Running
	I1004 03:21:22.660476   30630 system_pods.go:89] "storage-provisioner" [cc60903f-91b9-4e59-92ab-9f16c09d38d2] Running
	I1004 03:21:22.660481   30630 system_pods.go:126] duration metric: took 210.876444ms to wait for k8s-apps to be running ...
	I1004 03:21:22.660493   30630 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 03:21:22.660540   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:21:22.675933   30630 system_svc.go:56] duration metric: took 15.434198ms WaitForService to wait for kubelet
	I1004 03:21:22.675957   30630 kubeadm.go:582] duration metric: took 24.146164676s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:21:22.675972   30630 node_conditions.go:102] verifying NodePressure condition ...
	I1004 03:21:22.845860   30630 request.go:632] Waited for 169.820621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes
	I1004 03:21:22.845932   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes
	I1004 03:21:22.845941   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.845948   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.845959   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.850058   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:22.851493   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:21:22.851511   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:21:22.851521   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:21:22.851525   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:21:22.851529   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:21:22.851534   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:21:22.851538   30630 node_conditions.go:105] duration metric: took 175.561582ms to run NodePressure ...
	I1004 03:21:22.851551   30630 start.go:241] waiting for startup goroutines ...
	I1004 03:21:22.851569   30630 start.go:255] writing updated cluster config ...
	I1004 03:21:22.851861   30630 ssh_runner.go:195] Run: rm -f paused
	I1004 03:21:22.904494   30630 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 03:21:22.906685   30630 out.go:177] * Done! kubectl is now configured to use "ha-994751" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.486504417Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012317486479482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9cc55ec2-f7e2-4232-b12f-78566ebd46e8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.487075109Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8046d3b9-23ab-41e6-b1be-d619dfda185f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.487160920Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8046d3b9-23ab-41e6-b1be-d619dfda185f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.487376723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dd8849f48bb11b35bb1f5bd585512cb0b89dc44b163449a6b50f15f54de02a5,PodSandboxId:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728012087721104622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586,PodSandboxId:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946866667557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe1e8ec5dfe43f5131ba996561aaabff07b86db463b8b1773b21b5e5c2d1261,PodSandboxId:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728011946910404717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd,PodSandboxId:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946858124672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5b
d8-44d6-a7dd-6eb87ef3b9ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99,PodSandboxId:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17280119
34650609445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160,PodSandboxId:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728011934180469533,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f,PodSandboxId:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728011924453506492,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec,PodSandboxId:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728011921323682309,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec,PodSandboxId:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728011921323115948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe,PodSandboxId:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728011921253183954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8,PodSandboxId:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728011921250593100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8046d3b9-23ab-41e6-b1be-d619dfda185f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.527673731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0fb6ea5-7ca6-479f-8c1c-9a20855c2317 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.527745842Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0fb6ea5-7ca6-479f-8c1c-9a20855c2317 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.529190877Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73134d50-cb34-469d-b66f-40f6d0e65202 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.529743824Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012317529717847,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73134d50-cb34-469d-b66f-40f6d0e65202 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.530488977Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d41e27e-e591-448d-8458-efb0be0f6d5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.530558917Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d41e27e-e591-448d-8458-efb0be0f6d5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.530794637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dd8849f48bb11b35bb1f5bd585512cb0b89dc44b163449a6b50f15f54de02a5,PodSandboxId:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728012087721104622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586,PodSandboxId:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946866667557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe1e8ec5dfe43f5131ba996561aaabff07b86db463b8b1773b21b5e5c2d1261,PodSandboxId:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728011946910404717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd,PodSandboxId:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946858124672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5b
d8-44d6-a7dd-6eb87ef3b9ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99,PodSandboxId:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17280119
34650609445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160,PodSandboxId:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728011934180469533,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f,PodSandboxId:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728011924453506492,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec,PodSandboxId:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728011921323682309,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec,PodSandboxId:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728011921323115948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe,PodSandboxId:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728011921253183954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8,PodSandboxId:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728011921250593100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d41e27e-e591-448d-8458-efb0be0f6d5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.571468164Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4271a925-6565-4415-92ec-41203fabc7ef name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.571561098Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4271a925-6565-4415-92ec-41203fabc7ef name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.573412819Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4cea29ac-065f-403c-bf0f-20b1f9c2d774 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.574553118Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012317574518556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4cea29ac-065f-403c-bf0f-20b1f9c2d774 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.575405092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8a83a55-7588-491f-ba05-a923cdac35b1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.575464732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8a83a55-7588-491f-ba05-a923cdac35b1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.575730567Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dd8849f48bb11b35bb1f5bd585512cb0b89dc44b163449a6b50f15f54de02a5,PodSandboxId:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728012087721104622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586,PodSandboxId:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946866667557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe1e8ec5dfe43f5131ba996561aaabff07b86db463b8b1773b21b5e5c2d1261,PodSandboxId:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728011946910404717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd,PodSandboxId:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946858124672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5b
d8-44d6-a7dd-6eb87ef3b9ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99,PodSandboxId:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17280119
34650609445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160,PodSandboxId:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728011934180469533,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f,PodSandboxId:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728011924453506492,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec,PodSandboxId:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728011921323682309,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec,PodSandboxId:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728011921323115948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe,PodSandboxId:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728011921253183954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8,PodSandboxId:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728011921250593100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8a83a55-7588-491f-ba05-a923cdac35b1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.620317416Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=acc547a9-e127-4fe0-8e0e-8255ae6f51e8 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.620406798Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=acc547a9-e127-4fe0-8e0e-8255ae6f51e8 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.627227302Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ebe53d11-4b92-4957-9e16-35481f485917 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.627849116Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012317627820960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebe53d11-4b92-4957-9e16-35481f485917 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.628589912Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee2eab1c-7990-43f6-8b1d-0d4b4671ffb8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.628663363Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee2eab1c-7990-43f6-8b1d-0d4b4671ffb8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:17 ha-994751 crio[664]: time="2024-10-04 03:25:17.628880402Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dd8849f48bb11b35bb1f5bd585512cb0b89dc44b163449a6b50f15f54de02a5,PodSandboxId:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728012087721104622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586,PodSandboxId:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946866667557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe1e8ec5dfe43f5131ba996561aaabff07b86db463b8b1773b21b5e5c2d1261,PodSandboxId:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728011946910404717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd,PodSandboxId:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946858124672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5b
d8-44d6-a7dd-6eb87ef3b9ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99,PodSandboxId:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17280119
34650609445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160,PodSandboxId:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728011934180469533,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f,PodSandboxId:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728011924453506492,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec,PodSandboxId:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728011921323682309,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec,PodSandboxId:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728011921323115948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe,PodSandboxId:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728011921253183954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8,PodSandboxId:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728011921250593100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee2eab1c-7990-43f6-8b1d-0d4b4671ffb8 name=/runtime.v1.RuntimeService/ListContainers
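For readers of this log section: the CRI-O debug entries above record gRPC requests to the runtime.v1 RuntimeService/ImageService endpoints (Version, ImageFsInfo, ListContainers) on the node. The following is a minimal Go sketch of issuing the same calls against the CRI-O socket; it is illustrative only and not part of the test run, and the socket path (/var/run/crio/crio.sock) and use of the k8s.io/cri-api client package are assumptions.

// sketch: query the CRI endpoints seen in the CRI-O debug log above (illustrative, not from the test run)
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O socket path; minikube's crio runtime typically listens here.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Corresponds to the /runtime.v1.RuntimeService/Version entries in the log.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// Corresponds to the /runtime.v1.RuntimeService/ListContainers entries in the log.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Truncated ID, state, and container name, similar to the "container status" table below.
		fmt.Printf("%-13.13s %-20s %s\n", c.Id, c.State, c.Metadata.Name)
	}
}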
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9dd8849f48bb1       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   21e8386b77b62       busybox-7dff88458-vh5j6
	2fe1e8ec5dfe4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   dab235bc541ca       storage-provisioner
	eb082a979b36c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   be9b34d6ca0bf       coredns-7c65d6cfc9-zgdck
	93aa8fd39f9c0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   d9a5ca3b325fa       coredns-7c65d6cfc9-l6zst
	6a3f40105608f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   454652c11f4fe       kindnet-2mhh2
	731622c5caa6f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   44f2b282edd57       kube-proxy-f44b9
	8830f0c28d759       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   5461b35eef9c3       kube-vip-ha-994751
	e49d081b73667       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   0372e9d489f05       kube-scheduler-ha-994751
	f5568cb7839e2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   c61920ab308f6       etcd-ha-994751
	849282c506754       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   6d7ea048eea90       kube-apiserver-ha-994751
	f041d718c872f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   8c1c0f1b1a430       kube-controller-manager-ha-994751
	
	
	==> coredns [93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd] <==
	[INFO] 10.244.2.2:42178 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.010745169s
	[INFO] 10.244.2.2:34829 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.009009564s
	[INFO] 10.244.0.4:43910 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001485572s
	[INFO] 10.244.1.2:45378 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000181404s
	[INFO] 10.244.1.2:40886 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001942971s
	[INFO] 10.244.2.2:45461 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000217787s
	[INFO] 10.244.2.2:56545 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000167289s
	[INFO] 10.244.2.2:52063 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000246892s
	[INFO] 10.244.0.4:48765 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150103s
	[INFO] 10.244.1.2:53871 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168625s
	[INFO] 10.244.1.2:58325 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001736755s
	[INFO] 10.244.1.2:38700 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085818s
	[INFO] 10.244.2.2:53525 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016163s
	[INFO] 10.244.2.2:55339 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126355s
	[INFO] 10.244.0.4:33506 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176834s
	[INFO] 10.244.0.4:47714 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136674s
	[INFO] 10.244.0.4:49593 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139876s
	[INFO] 10.244.1.2:51243 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137889s
	[INFO] 10.244.2.2:56043 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000221873s
	[INFO] 10.244.2.2:35783 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000138959s
	[INFO] 10.244.0.4:37503 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013937s
	[INFO] 10.244.0.4:46310 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132408s
	[INFO] 10.244.0.4:35014 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074557s
	[INFO] 10.244.1.2:51803 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153481s
	[INFO] 10.244.1.2:47758 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000198394s
	
	
	==> coredns [eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586] <==
	[INFO] 10.244.2.2:43924 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.01283325s
	[INFO] 10.244.2.2:35798 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148903s
	[INFO] 10.244.0.4:59562 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140549s
	[INFO] 10.244.0.4:41362 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002209213s
	[INFO] 10.244.0.4:41786 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133758s
	[INFO] 10.244.0.4:49269 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001539557s
	[INFO] 10.244.0.4:56941 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018736s
	[INFO] 10.244.0.4:47984 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173422s
	[INFO] 10.244.0.4:41970 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061431s
	[INFO] 10.244.1.2:32918 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119893s
	[INFO] 10.244.1.2:39792 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093113s
	[INFO] 10.244.1.2:41331 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001259323s
	[INFO] 10.244.1.2:45464 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106483s
	[INFO] 10.244.1.2:35852 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153198s
	[INFO] 10.244.2.2:38240 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114031s
	[INFO] 10.244.2.2:54004 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008059s
	[INFO] 10.244.0.4:39542 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092418s
	[INFO] 10.244.1.2:41262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166812s
	[INFO] 10.244.1.2:55889 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146278s
	[INFO] 10.244.1.2:35654 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131643s
	[INFO] 10.244.2.2:37029 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012813s
	[INFO] 10.244.2.2:33774 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000223324s
	[INFO] 10.244.0.4:38911 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138291s
	[INFO] 10.244.1.2:56619 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093621s
	[INFO] 10.244.1.2:33622 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154511s
	
	
	==> describe nodes <==
	Name:               ha-994751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-994751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-994751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T03_18_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:18:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-994751
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:21:51 +0000   Fri, 04 Oct 2024 03:18:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:21:51 +0000   Fri, 04 Oct 2024 03:18:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:21:51 +0000   Fri, 04 Oct 2024 03:18:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:21:51 +0000   Fri, 04 Oct 2024 03:19:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    ha-994751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7452b105a68246eeb61757acefd7f693
	  System UUID:                7452b105-a682-46ee-b617-57acefd7f693
	  Boot ID:                    aecf415c-e5c2-46a9-81d5-d95311218d51
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vh5j6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 coredns-7c65d6cfc9-l6zst             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m25s
	  kube-system                 coredns-7c65d6cfc9-zgdck             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m25s
	  kube-system                 etcd-ha-994751                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m30s
	  kube-system                 kindnet-2mhh2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m25s
	  kube-system                 kube-apiserver-ha-994751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-ha-994751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-proxy-f44b9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-scheduler-ha-994751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-vip-ha-994751                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m23s  kube-proxy       
	  Normal  Starting                 6m30s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m30s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m30s  kubelet          Node ha-994751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s  kubelet          Node ha-994751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m30s  kubelet          Node ha-994751 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m26s  node-controller  Node ha-994751 event: Registered Node ha-994751 in Controller
	  Normal  NodeReady                6m11s  kubelet          Node ha-994751 status is now: NodeReady
	  Normal  RegisteredNode           5m30s  node-controller  Node ha-994751 event: Registered Node ha-994751 in Controller
	  Normal  RegisteredNode           4m14s  node-controller  Node ha-994751 event: Registered Node ha-994751 in Controller
	
	
	Name:               ha-994751-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-994751-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-994751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_04T03_19_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:19:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-994751-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:22:42 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 04 Oct 2024 03:21:42 +0000   Fri, 04 Oct 2024 03:23:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 04 Oct 2024 03:21:42 +0000   Fri, 04 Oct 2024 03:23:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 04 Oct 2024 03:21:42 +0000   Fri, 04 Oct 2024 03:23:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 04 Oct 2024 03:21:42 +0000   Fri, 04 Oct 2024 03:23:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    ha-994751-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f6683e6a9e1244f787f84f2a5c1bf490
	  System UUID:                f6683e6a-9e12-44f7-87f8-4f2a5c1bf490
	  Boot ID:                    8b02ddc0-820d-4de5-b649-7e2202f66ea5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wc5kg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 etcd-ha-994751-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m37s
	  kube-system                 kindnet-rmcvt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m39s
	  kube-system                 kube-apiserver-ha-994751-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-controller-manager-ha-994751-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-proxy-ph6cf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-scheduler-ha-994751-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-vip-ha-994751-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m34s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m39s (x8 over 5m39s)  kubelet          Node ha-994751-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m39s (x8 over 5m39s)  kubelet          Node ha-994751-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m39s (x7 over 5m39s)  kubelet          Node ha-994751-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-994751-m02 event: Registered Node ha-994751-m02 in Controller
	  Normal  RegisteredNode           5m31s                  node-controller  Node ha-994751-m02 event: Registered Node ha-994751-m02 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-994751-m02 event: Registered Node ha-994751-m02 in Controller
	  Normal  NodeNotReady             114s                   node-controller  Node ha-994751-m02 status is now: NodeNotReady
	
	
	Name:               ha-994751-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-994751-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-994751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_04T03_20_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:20:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-994751-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:20:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:20:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:20:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:21:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-994751-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 df18b27d8a2e4c8893a601b97ec7e8e0
	  System UUID:                df18b27d-8a2e-4c88-93a6-01b97ec7e8e0
	  Boot ID:                    138aa962-c7a2-47ea-82c1-2a5ccfbc3de0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nrdqk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 etcd-ha-994751-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m22s
	  kube-system                 kindnet-clt5p                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m24s
	  kube-system                 kube-apiserver-ha-994751-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-controller-manager-ha-994751-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-proxy-9q6q2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-scheduler-ha-994751-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-vip-ha-994751-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m24s (x8 over 4m24s)  kubelet          Node ha-994751-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m24s (x8 over 4m24s)  kubelet          Node ha-994751-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m24s (x7 over 4m24s)  kubelet          Node ha-994751-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-994751-m03 event: Registered Node ha-994751-m03 in Controller
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-994751-m03 event: Registered Node ha-994751-m03 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-994751-m03 event: Registered Node ha-994751-m03 in Controller
	
	
	Name:               ha-994751-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-994751-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-994751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_04T03_22_03_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:22:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-994751-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:22:33 +0000   Fri, 04 Oct 2024 03:22:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:22:33 +0000   Fri, 04 Oct 2024 03:22:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:22:33 +0000   Fri, 04 Oct 2024 03:22:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:22:33 +0000   Fri, 04 Oct 2024 03:22:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    ha-994751-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d61802e745d4414c8e0a1c3e5c1319f7
	  System UUID:                d61802e7-45d4-414c-8e0a-1c3e5c1319f7
	  Boot ID:                    f154d01f-d315-40b5-84e6-0d0b669735cf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-sggz9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m15s
	  kube-system                 kube-proxy-xsz4w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m9s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  3m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-994751-m04 event: Registered Node ha-994751-m04 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-994751-m04 event: Registered Node ha-994751-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m15s (x2 over 3m16s)  kubelet          Node ha-994751-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m15s (x2 over 3m16s)  kubelet          Node ha-994751-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m15s (x2 over 3m16s)  kubelet          Node ha-994751-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-994751-m04 event: Registered Node ha-994751-m04 in Controller
	  Normal  NodeReady                2m54s                  kubelet          Node ha-994751-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 4 03:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050646] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040240] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.800548] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.470270] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.581508] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.982603] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.059297] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061306] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.198058] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.129574] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.276832] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.888308] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +3.806908] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.054958] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.117103] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.085956] kauditd_printk_skb: 79 callbacks suppressed
	[  +7.063470] kauditd_printk_skb: 21 callbacks suppressed
	[Oct 4 03:19] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.285701] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec] <==
	{"level":"warn","ts":"2024-10-04T03:25:17.783622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:17.874023Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:17.894828Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:17.902811Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:17.906879Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:17.922763Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:17.931703Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:17.932600Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:17.949038Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:17.956294Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:17.965262Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:17.974663Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:17.975027Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:18.019280Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:18.034101Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:18.035182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:18.117252Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:18.133782Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:18.141857Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:18.147602Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:18.152603Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:18.158174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:18.166092Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:18.174149Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:18.174601Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 03:25:18 up 7 min,  0 users,  load average: 0.12, 0.16, 0.09
	Linux ha-994751 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99] <==
	I1004 03:24:46.000568       1 main.go:322] Node ha-994751-m04 has CIDR [10.244.3.0/24] 
	I1004 03:24:55.996427       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1004 03:24:55.996581       1 main.go:299] handling current node
	I1004 03:24:55.996609       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I1004 03:24:55.996628       1 main.go:322] Node ha-994751-m02 has CIDR [10.244.1.0/24] 
	I1004 03:24:55.996891       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1004 03:24:55.997045       1 main.go:322] Node ha-994751-m03 has CIDR [10.244.2.0/24] 
	I1004 03:24:55.997190       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I1004 03:24:55.997280       1 main.go:322] Node ha-994751-m04 has CIDR [10.244.3.0/24] 
	I1004 03:25:05.999244       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I1004 03:25:05.999341       1 main.go:322] Node ha-994751-m02 has CIDR [10.244.1.0/24] 
	I1004 03:25:05.999525       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1004 03:25:05.999565       1 main.go:322] Node ha-994751-m03 has CIDR [10.244.2.0/24] 
	I1004 03:25:05.999630       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I1004 03:25:05.999660       1 main.go:322] Node ha-994751-m04 has CIDR [10.244.3.0/24] 
	I1004 03:25:05.999742       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1004 03:25:05.999771       1 main.go:299] handling current node
	I1004 03:25:16.002618       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1004 03:25:16.002727       1 main.go:299] handling current node
	I1004 03:25:16.002764       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I1004 03:25:16.002782       1 main.go:322] Node ha-994751-m02 has CIDR [10.244.1.0/24] 
	I1004 03:25:16.003010       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1004 03:25:16.003037       1 main.go:322] Node ha-994751-m03 has CIDR [10.244.2.0/24] 
	I1004 03:25:16.003121       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I1004 03:25:16.003140       1 main.go:322] Node ha-994751-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe] <==
	I1004 03:18:46.533293       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 03:18:46.536324       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1004 03:18:46.567509       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.65]
	I1004 03:18:46.569728       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:18:46.579199       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:18:47.324394       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 03:18:47.342483       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1004 03:18:47.354293       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 03:18:52.030260       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1004 03:18:52.131882       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1004 03:21:29.605335       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53690: use of closed network connection
	E1004 03:21:29.795618       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53702: use of closed network connection
	E1004 03:21:29.974284       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53722: use of closed network connection
	E1004 03:21:30.184885       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53734: use of closed network connection
	E1004 03:21:30.399362       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53748: use of closed network connection
	E1004 03:21:30.586499       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53770: use of closed network connection
	E1004 03:21:30.773657       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53776: use of closed network connection
	E1004 03:21:30.946921       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53796: use of closed network connection
	E1004 03:21:31.140751       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53812: use of closed network connection
	E1004 03:21:31.439406       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53848: use of closed network connection
	E1004 03:21:31.610289       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53874: use of closed network connection
	E1004 03:21:31.791527       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53896: use of closed network connection
	E1004 03:21:31.973829       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53924: use of closed network connection
	E1004 03:21:32.157183       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53938: use of closed network connection
	E1004 03:21:32.326553       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53952: use of closed network connection
	
	
	==> kube-controller-manager [f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8] <==
	I1004 03:22:03.059069       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-994751-m04" podCIDRs=["10.244.3.0/24"]
	I1004 03:22:03.059118       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.061876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.076574       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.137039       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.276697       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.662795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.977537       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:04.044472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:06.344839       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:06.345923       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-994751-m04"
	I1004 03:22:06.383881       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:13.412719       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:24.487665       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-994751-m04"
	I1004 03:22:24.487754       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:24.502742       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:26.362397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:33.863379       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:23:24.007837       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-994751-m04"
	I1004 03:23:24.008551       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m02"
	I1004 03:23:24.038687       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m02"
	I1004 03:23:24.187288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.759379ms"
	I1004 03:23:24.187415       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.69µs"
	I1004 03:23:26.454826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m02"
	I1004 03:23:29.201808       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m02"
	
	
	==> kube-proxy [731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:18:54.520708       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:18:54.543515       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.65"]
	E1004 03:18:54.543642       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:18:54.585531       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:18:54.585592       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:18:54.585623       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:18:54.595069       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:18:54.598246       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:18:54.598343       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:18:54.602801       1 config.go:199] "Starting service config controller"
	I1004 03:18:54.603172       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:18:54.603521       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:18:54.603587       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:18:54.607605       1 config.go:328] "Starting node config controller"
	I1004 03:18:54.607621       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:18:54.704654       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:18:54.704732       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:18:54.707708       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec] <==
	W1004 03:18:45.760588       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:18:45.760709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:18:45.902575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:18:45.902704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:18:45.937221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 03:18:45.937512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:18:46.030883       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 03:18:46.031049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1004 03:18:48.095287       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1004 03:22:03.109132       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zh45q\": pod kindnet-zh45q is already assigned to node \"ha-994751-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zh45q" node="ha-994751-m04"
	E1004 03:22:03.113875       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cc0c3789-7dca-4ede-a355-9ac6d9db68c2(kube-system/kindnet-zh45q) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-zh45q"
	E1004 03:22:03.114052       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zh45q\": pod kindnet-zh45q is already assigned to node \"ha-994751-m04\"" pod="kube-system/kindnet-zh45q"
	I1004 03:22:03.114143       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zh45q" node="ha-994751-m04"
	E1004 03:22:03.121368       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xsz4w\": pod kube-proxy-xsz4w is already assigned to node \"ha-994751-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xsz4w" node="ha-994751-m04"
	E1004 03:22:03.121569       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4f6e672a-e80b-4f45-b3a5-98dfa1ebaad3(kube-system/kube-proxy-xsz4w) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-xsz4w"
	E1004 03:22:03.121624       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xsz4w\": pod kube-proxy-xsz4w is already assigned to node \"ha-994751-m04\"" pod="kube-system/kube-proxy-xsz4w"
	I1004 03:22:03.121686       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xsz4w" node="ha-994751-m04"
	E1004 03:22:03.177157       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-zbb9z\": pod kube-proxy-zbb9z is already assigned to node \"ha-994751-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-zbb9z" node="ha-994751-m04"
	E1004 03:22:03.177330       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a7948b15-0522-4cbd-8803-8c139b2e791a(kube-system/kube-proxy-zbb9z) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-zbb9z"
	E1004 03:22:03.177379       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zbb9z\": pod kube-proxy-zbb9z is already assigned to node \"ha-994751-m04\"" pod="kube-system/kube-proxy-zbb9z"
	I1004 03:22:03.177445       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-zbb9z" node="ha-994751-m04"
	E1004 03:22:03.177921       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qfb5r\": pod kindnet-qfb5r is already assigned to node \"ha-994751-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qfb5r" node="ha-994751-m04"
	E1004 03:22:03.181030       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 085d0454-1ccc-408e-ae12-366c29ab0a15(kube-system/kindnet-qfb5r) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qfb5r"
	E1004 03:22:03.181113       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qfb5r\": pod kindnet-qfb5r is already assigned to node \"ha-994751-m04\"" pod="kube-system/kindnet-qfb5r"
	I1004 03:22:03.181162       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qfb5r" node="ha-994751-m04"
	
	
	==> kubelet <==
	Oct 04 03:23:47 ha-994751 kubelet[1305]: E1004 03:23:47.373529    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012227373073617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:23:47 ha-994751 kubelet[1305]: E1004 03:23:47.373558    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012227373073617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:23:57 ha-994751 kubelet[1305]: E1004 03:23:57.376221    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012237375404117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:23:57 ha-994751 kubelet[1305]: E1004 03:23:57.376607    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012237375404117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:07 ha-994751 kubelet[1305]: E1004 03:24:07.379453    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012247378682972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:07 ha-994751 kubelet[1305]: E1004 03:24:07.379509    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012247378682972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:17 ha-994751 kubelet[1305]: E1004 03:24:17.381784    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012257381348480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:17 ha-994751 kubelet[1305]: E1004 03:24:17.382305    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012257381348480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:27 ha-994751 kubelet[1305]: E1004 03:24:27.387309    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012267384211934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:27 ha-994751 kubelet[1305]: E1004 03:24:27.387674    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012267384211934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:37 ha-994751 kubelet[1305]: E1004 03:24:37.389662    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012277389023499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:37 ha-994751 kubelet[1305]: E1004 03:24:37.390147    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012277389023499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:47 ha-994751 kubelet[1305]: E1004 03:24:47.337368    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:24:47 ha-994751 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:24:47 ha-994751 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:24:47 ha-994751 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:24:47 ha-994751 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:24:47 ha-994751 kubelet[1305]: E1004 03:24:47.393080    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012287392471580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:47 ha-994751 kubelet[1305]: E1004 03:24:47.393113    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012287392471580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:57 ha-994751 kubelet[1305]: E1004 03:24:57.395248    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012297394773017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:57 ha-994751 kubelet[1305]: E1004 03:24:57.395590    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012297394773017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:25:07 ha-994751 kubelet[1305]: E1004 03:25:07.398270    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012307397806386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:25:07 ha-994751 kubelet[1305]: E1004 03:25:07.398317    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012307397806386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:25:17 ha-994751 kubelet[1305]: E1004 03:25:17.401131    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012317400306587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:25:17 ha-994751 kubelet[1305]: E1004 03:25:17.401184    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012317400306587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-994751 -n ha-994751
helpers_test.go:261: (dbg) Run:  kubectl --context ha-994751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.59s)
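The kubelet log above also shows a recurring "Could not set up iptables canary" error: the legacy ip6tables binary inside the guest cannot initialize the `nat' table ("Table does not exist (do you need to insmod?)"), so the KUBE-KUBELET-CANARY chain is never created. A minimal diagnostic sketch, assuming shell access to the node (e.g. via minikube ssh) and that ip6tables is the same legacy binary seen in the log (this is not part of the test suite), is to probe the table directly:

// probe_ip6tables.go: list the ip6tables "nat" table, the same table the
// kubelet canary chain would be created in. Hypothetical diagnostic only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// Mirrors the failure above: the table stays unavailable until the
		// ip6table_nat kernel module (or equivalent nf_tables support) is loaded.
		fmt.Println("ip6tables nat table unavailable:", err)
	}
}

If the probe fails the same way, loading ip6table_nat on the node typically clears the canary error; the eviction-manager "missing image stats" messages are a separate symptom and are unaffected by it.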

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.962083332s)
ha_test.go:309: expected profile "ha-994751" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-994751\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-994751\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-994751\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.65\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.117\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.53\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.134\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"me
tallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":2
62144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-994751 -n ha-994751
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-994751 logs -n 25: (1.449234501s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751:/home/docker/cp-test_ha-994751-m03_ha-994751.txt                       |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751 sudo cat                                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m03_ha-994751.txt                                 |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m02:/home/docker/cp-test_ha-994751-m03_ha-994751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m02 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m03_ha-994751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04:/home/docker/cp-test_ha-994751-m03_ha-994751-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m04 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m03_ha-994751-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp testdata/cp-test.txt                                                | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1363640037/001/cp-test_ha-994751-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751:/home/docker/cp-test_ha-994751-m04_ha-994751.txt                       |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751 sudo cat                                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751.txt                                 |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m02:/home/docker/cp-test_ha-994751-m04_ha-994751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m02 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03:/home/docker/cp-test_ha-994751-m04_ha-994751-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m03 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-994751 node stop m02 -v=7                                                     | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-994751 node start m02 -v=7                                                    | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 03:18:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 03:18:05.722757   30630 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:18:05.722861   30630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:18:05.722866   30630 out.go:358] Setting ErrFile to fd 2...
	I1004 03:18:05.722871   30630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:18:05.723051   30630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 03:18:05.723672   30630 out.go:352] Setting JSON to false
	I1004 03:18:05.724646   30630 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3631,"bootTime":1728008255,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 03:18:05.724743   30630 start.go:139] virtualization: kvm guest
	I1004 03:18:05.726903   30630 out.go:177] * [ha-994751] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 03:18:05.728435   30630 notify.go:220] Checking for updates...
	I1004 03:18:05.728459   30630 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:18:05.730163   30630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:18:05.731580   30630 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:18:05.733048   30630 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:05.734449   30630 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 03:18:05.735914   30630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:18:05.737675   30630 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:18:05.774405   30630 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 03:18:05.775959   30630 start.go:297] selected driver: kvm2
	I1004 03:18:05.775980   30630 start.go:901] validating driver "kvm2" against <nil>
	I1004 03:18:05.775993   30630 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:18:05.776759   30630 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:18:05.776855   30630 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 03:18:05.791915   30630 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 03:18:05.791974   30630 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 03:18:05.792218   30630 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:18:05.792245   30630 cni.go:84] Creating CNI manager for ""
	I1004 03:18:05.792281   30630 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1004 03:18:05.792289   30630 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1004 03:18:05.792342   30630 start.go:340] cluster config:
	{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1004 03:18:05.792429   30630 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:18:05.794321   30630 out.go:177] * Starting "ha-994751" primary control-plane node in "ha-994751" cluster
	I1004 03:18:05.795797   30630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:18:05.795855   30630 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 03:18:05.795867   30630 cache.go:56] Caching tarball of preloaded images
	I1004 03:18:05.795948   30630 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 03:18:05.795958   30630 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:18:05.796250   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:18:05.796278   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json: {Name:mk8f786fa93ab6935652e46df2caeb1892ffd1fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:05.796426   30630 start.go:360] acquireMachinesLock for ha-994751: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 03:18:05.796455   30630 start.go:364] duration metric: took 15.921µs to acquireMachinesLock for "ha-994751"
	I1004 03:18:05.796470   30630 start.go:93] Provisioning new machine with config: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:18:05.796525   30630 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 03:18:05.798287   30630 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 03:18:05.798440   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:05.798475   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:05.812686   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34047
	I1004 03:18:05.813143   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:05.813678   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:05.813709   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:05.814066   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:05.814254   30630 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:18:05.814407   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:05.814549   30630 start.go:159] libmachine.API.Create for "ha-994751" (driver="kvm2")
	I1004 03:18:05.814572   30630 client.go:168] LocalClient.Create starting
	I1004 03:18:05.814612   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 03:18:05.814645   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:18:05.814661   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:18:05.814721   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 03:18:05.814738   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:18:05.814750   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:18:05.814764   30630 main.go:141] libmachine: Running pre-create checks...
	I1004 03:18:05.814779   30630 main.go:141] libmachine: (ha-994751) Calling .PreCreateCheck
	I1004 03:18:05.815056   30630 main.go:141] libmachine: (ha-994751) Calling .GetConfigRaw
	I1004 03:18:05.815402   30630 main.go:141] libmachine: Creating machine...
	I1004 03:18:05.815413   30630 main.go:141] libmachine: (ha-994751) Calling .Create
	I1004 03:18:05.815566   30630 main.go:141] libmachine: (ha-994751) Creating KVM machine...
	I1004 03:18:05.816861   30630 main.go:141] libmachine: (ha-994751) DBG | found existing default KVM network
	I1004 03:18:05.817536   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:05.817406   30653 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I1004 03:18:05.817563   30630 main.go:141] libmachine: (ha-994751) DBG | created network xml: 
	I1004 03:18:05.817586   30630 main.go:141] libmachine: (ha-994751) DBG | <network>
	I1004 03:18:05.817592   30630 main.go:141] libmachine: (ha-994751) DBG |   <name>mk-ha-994751</name>
	I1004 03:18:05.817597   30630 main.go:141] libmachine: (ha-994751) DBG |   <dns enable='no'/>
	I1004 03:18:05.817602   30630 main.go:141] libmachine: (ha-994751) DBG |   
	I1004 03:18:05.817610   30630 main.go:141] libmachine: (ha-994751) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1004 03:18:05.817615   30630 main.go:141] libmachine: (ha-994751) DBG |     <dhcp>
	I1004 03:18:05.817621   30630 main.go:141] libmachine: (ha-994751) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1004 03:18:05.817629   30630 main.go:141] libmachine: (ha-994751) DBG |     </dhcp>
	I1004 03:18:05.817644   30630 main.go:141] libmachine: (ha-994751) DBG |   </ip>
	I1004 03:18:05.817652   30630 main.go:141] libmachine: (ha-994751) DBG |   
	I1004 03:18:05.817659   30630 main.go:141] libmachine: (ha-994751) DBG | </network>
	I1004 03:18:05.817668   30630 main.go:141] libmachine: (ha-994751) DBG | 
	I1004 03:18:05.823178   30630 main.go:141] libmachine: (ha-994751) DBG | trying to create private KVM network mk-ha-994751 192.168.39.0/24...
	I1004 03:18:05.886885   30630 main.go:141] libmachine: (ha-994751) DBG | private KVM network mk-ha-994751 192.168.39.0/24 created
	I1004 03:18:05.886925   30630 main.go:141] libmachine: (ha-994751) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751 ...
	I1004 03:18:05.886940   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:05.886875   30653 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:05.886958   30630 main.go:141] libmachine: (ha-994751) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 03:18:05.887024   30630 main.go:141] libmachine: (ha-994751) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 03:18:06.142449   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:06.142299   30653 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa...
	I1004 03:18:06.210635   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:06.210526   30653 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/ha-994751.rawdisk...
	I1004 03:18:06.210664   30630 main.go:141] libmachine: (ha-994751) DBG | Writing magic tar header
	I1004 03:18:06.210677   30630 main.go:141] libmachine: (ha-994751) DBG | Writing SSH key tar header
	I1004 03:18:06.210688   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:06.210638   30653 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751 ...
	I1004 03:18:06.210755   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751
	I1004 03:18:06.210796   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751 (perms=drwx------)
	I1004 03:18:06.210813   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 03:18:06.210829   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:06.210837   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 03:18:06.210844   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 03:18:06.210850   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home/jenkins
	I1004 03:18:06.210857   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 03:18:06.210924   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 03:18:06.210944   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 03:18:06.210949   30630 main.go:141] libmachine: (ha-994751) DBG | Checking permissions on dir: /home
	I1004 03:18:06.210957   30630 main.go:141] libmachine: (ha-994751) DBG | Skipping /home - not owner
	I1004 03:18:06.210976   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 03:18:06.210990   30630 main.go:141] libmachine: (ha-994751) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 03:18:06.210999   30630 main.go:141] libmachine: (ha-994751) Creating domain...
	I1004 03:18:06.212079   30630 main.go:141] libmachine: (ha-994751) define libvirt domain using xml: 
	I1004 03:18:06.212103   30630 main.go:141] libmachine: (ha-994751) <domain type='kvm'>
	I1004 03:18:06.212112   30630 main.go:141] libmachine: (ha-994751)   <name>ha-994751</name>
	I1004 03:18:06.212118   30630 main.go:141] libmachine: (ha-994751)   <memory unit='MiB'>2200</memory>
	I1004 03:18:06.212126   30630 main.go:141] libmachine: (ha-994751)   <vcpu>2</vcpu>
	I1004 03:18:06.212132   30630 main.go:141] libmachine: (ha-994751)   <features>
	I1004 03:18:06.212140   30630 main.go:141] libmachine: (ha-994751)     <acpi/>
	I1004 03:18:06.212152   30630 main.go:141] libmachine: (ha-994751)     <apic/>
	I1004 03:18:06.212164   30630 main.go:141] libmachine: (ha-994751)     <pae/>
	I1004 03:18:06.212177   30630 main.go:141] libmachine: (ha-994751)     
	I1004 03:18:06.212187   30630 main.go:141] libmachine: (ha-994751)   </features>
	I1004 03:18:06.212192   30630 main.go:141] libmachine: (ha-994751)   <cpu mode='host-passthrough'>
	I1004 03:18:06.212196   30630 main.go:141] libmachine: (ha-994751)   
	I1004 03:18:06.212200   30630 main.go:141] libmachine: (ha-994751)   </cpu>
	I1004 03:18:06.212204   30630 main.go:141] libmachine: (ha-994751)   <os>
	I1004 03:18:06.212210   30630 main.go:141] libmachine: (ha-994751)     <type>hvm</type>
	I1004 03:18:06.212215   30630 main.go:141] libmachine: (ha-994751)     <boot dev='cdrom'/>
	I1004 03:18:06.212228   30630 main.go:141] libmachine: (ha-994751)     <boot dev='hd'/>
	I1004 03:18:06.212253   30630 main.go:141] libmachine: (ha-994751)     <bootmenu enable='no'/>
	I1004 03:18:06.212268   30630 main.go:141] libmachine: (ha-994751)   </os>
	I1004 03:18:06.212286   30630 main.go:141] libmachine: (ha-994751)   <devices>
	I1004 03:18:06.212296   30630 main.go:141] libmachine: (ha-994751)     <disk type='file' device='cdrom'>
	I1004 03:18:06.212309   30630 main.go:141] libmachine: (ha-994751)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/boot2docker.iso'/>
	I1004 03:18:06.212319   30630 main.go:141] libmachine: (ha-994751)       <target dev='hdc' bus='scsi'/>
	I1004 03:18:06.212330   30630 main.go:141] libmachine: (ha-994751)       <readonly/>
	I1004 03:18:06.212334   30630 main.go:141] libmachine: (ha-994751)     </disk>
	I1004 03:18:06.212342   30630 main.go:141] libmachine: (ha-994751)     <disk type='file' device='disk'>
	I1004 03:18:06.212354   30630 main.go:141] libmachine: (ha-994751)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 03:18:06.212370   30630 main.go:141] libmachine: (ha-994751)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/ha-994751.rawdisk'/>
	I1004 03:18:06.212380   30630 main.go:141] libmachine: (ha-994751)       <target dev='hda' bus='virtio'/>
	I1004 03:18:06.212388   30630 main.go:141] libmachine: (ha-994751)     </disk>
	I1004 03:18:06.212397   30630 main.go:141] libmachine: (ha-994751)     <interface type='network'>
	I1004 03:18:06.212406   30630 main.go:141] libmachine: (ha-994751)       <source network='mk-ha-994751'/>
	I1004 03:18:06.212415   30630 main.go:141] libmachine: (ha-994751)       <model type='virtio'/>
	I1004 03:18:06.212440   30630 main.go:141] libmachine: (ha-994751)     </interface>
	I1004 03:18:06.212460   30630 main.go:141] libmachine: (ha-994751)     <interface type='network'>
	I1004 03:18:06.212467   30630 main.go:141] libmachine: (ha-994751)       <source network='default'/>
	I1004 03:18:06.212471   30630 main.go:141] libmachine: (ha-994751)       <model type='virtio'/>
	I1004 03:18:06.212479   30630 main.go:141] libmachine: (ha-994751)     </interface>
	I1004 03:18:06.212494   30630 main.go:141] libmachine: (ha-994751)     <serial type='pty'>
	I1004 03:18:06.212502   30630 main.go:141] libmachine: (ha-994751)       <target port='0'/>
	I1004 03:18:06.212508   30630 main.go:141] libmachine: (ha-994751)     </serial>
	I1004 03:18:06.212516   30630 main.go:141] libmachine: (ha-994751)     <console type='pty'>
	I1004 03:18:06.212520   30630 main.go:141] libmachine: (ha-994751)       <target type='serial' port='0'/>
	I1004 03:18:06.212542   30630 main.go:141] libmachine: (ha-994751)     </console>
	I1004 03:18:06.212560   30630 main.go:141] libmachine: (ha-994751)     <rng model='virtio'>
	I1004 03:18:06.212574   30630 main.go:141] libmachine: (ha-994751)       <backend model='random'>/dev/random</backend>
	I1004 03:18:06.212585   30630 main.go:141] libmachine: (ha-994751)     </rng>
	I1004 03:18:06.212593   30630 main.go:141] libmachine: (ha-994751)     
	I1004 03:18:06.212602   30630 main.go:141] libmachine: (ha-994751)     
	I1004 03:18:06.212610   30630 main.go:141] libmachine: (ha-994751)   </devices>
	I1004 03:18:06.212618   30630 main.go:141] libmachine: (ha-994751) </domain>
	I1004 03:18:06.212627   30630 main.go:141] libmachine: (ha-994751) 
	I1004 03:18:06.216801   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:e9:7d:48 in network default
	I1004 03:18:06.217289   30630 main.go:141] libmachine: (ha-994751) Ensuring networks are active...
	I1004 03:18:06.217308   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:06.217978   30630 main.go:141] libmachine: (ha-994751) Ensuring network default is active
	I1004 03:18:06.218330   30630 main.go:141] libmachine: (ha-994751) Ensuring network mk-ha-994751 is active
	I1004 03:18:06.218792   30630 main.go:141] libmachine: (ha-994751) Getting domain xml...
	I1004 03:18:06.219458   30630 main.go:141] libmachine: (ha-994751) Creating domain...
	I1004 03:18:07.407094   30630 main.go:141] libmachine: (ha-994751) Waiting to get IP...
	I1004 03:18:07.407817   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:07.408229   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:07.408273   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:07.408187   30653 retry.go:31] will retry after 265.096314ms: waiting for machine to come up
	I1004 03:18:07.674734   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:07.675129   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:07.675155   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:07.675076   30653 retry.go:31] will retry after 390.620211ms: waiting for machine to come up
	I1004 03:18:08.067622   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:08.068086   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:08.068114   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:08.068031   30653 retry.go:31] will retry after 362.909556ms: waiting for machine to come up
	I1004 03:18:08.432460   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:08.432888   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:08.432909   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:08.432822   30653 retry.go:31] will retry after 609.869022ms: waiting for machine to come up
	I1004 03:18:09.044728   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:09.045180   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:09.045206   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:09.045129   30653 retry.go:31] will retry after 721.849297ms: waiting for machine to come up
	I1004 03:18:09.769005   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:09.769517   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:09.769542   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:09.769465   30653 retry.go:31] will retry after 920.066652ms: waiting for machine to come up
	I1004 03:18:10.691477   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:10.691934   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:10.691982   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:10.691880   30653 retry.go:31] will retry after 915.375779ms: waiting for machine to come up
	I1004 03:18:11.608614   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:11.609000   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:11.609026   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:11.608956   30653 retry.go:31] will retry after 1.213056064s: waiting for machine to come up
	I1004 03:18:12.823425   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:12.823843   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:12.823863   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:12.823799   30653 retry.go:31] will retry after 1.167496597s: waiting for machine to come up
	I1004 03:18:13.993222   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:13.993651   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:13.993670   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:13.993625   30653 retry.go:31] will retry after 1.774059142s: waiting for machine to come up
	I1004 03:18:15.769014   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:15.769477   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:15.769521   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:15.769420   30653 retry.go:31] will retry after 2.081580382s: waiting for machine to come up
	I1004 03:18:17.853131   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:17.853479   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:17.853503   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:17.853441   30653 retry.go:31] will retry after 3.090115259s: waiting for machine to come up
	I1004 03:18:20.945030   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:20.945469   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:20.945493   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:20.945409   30653 retry.go:31] will retry after 4.314609333s: waiting for machine to come up
	I1004 03:18:25.264846   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:25.265316   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find current IP address of domain ha-994751 in network mk-ha-994751
	I1004 03:18:25.265335   30630 main.go:141] libmachine: (ha-994751) DBG | I1004 03:18:25.265278   30653 retry.go:31] will retry after 4.302479318s: waiting for machine to come up
	I1004 03:18:29.572575   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.572946   30630 main.go:141] libmachine: (ha-994751) Found IP for machine: 192.168.39.65
	I1004 03:18:29.572975   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has current primary IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.572983   30630 main.go:141] libmachine: (ha-994751) Reserving static IP address...
	I1004 03:18:29.573371   30630 main.go:141] libmachine: (ha-994751) DBG | unable to find host DHCP lease matching {name: "ha-994751", mac: "52:54:00:9b:b2:a8", ip: "192.168.39.65"} in network mk-ha-994751
	I1004 03:18:29.642317   30630 main.go:141] libmachine: (ha-994751) DBG | Getting to WaitForSSH function...
	I1004 03:18:29.642344   30630 main.go:141] libmachine: (ha-994751) Reserved static IP address: 192.168.39.65
	I1004 03:18:29.642356   30630 main.go:141] libmachine: (ha-994751) Waiting for SSH to be available...
	I1004 03:18:29.644819   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.645174   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:29.645189   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.645350   30630 main.go:141] libmachine: (ha-994751) DBG | Using SSH client type: external
	I1004 03:18:29.645373   30630 main.go:141] libmachine: (ha-994751) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa (-rw-------)
	I1004 03:18:29.645433   30630 main.go:141] libmachine: (ha-994751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 03:18:29.645459   30630 main.go:141] libmachine: (ha-994751) DBG | About to run SSH command:
	I1004 03:18:29.645475   30630 main.go:141] libmachine: (ha-994751) DBG | exit 0
	I1004 03:18:29.768066   30630 main.go:141] libmachine: (ha-994751) DBG | SSH cmd err, output: <nil>: 
	I1004 03:18:29.768301   30630 main.go:141] libmachine: (ha-994751) KVM machine creation complete!
	I1004 03:18:29.768621   30630 main.go:141] libmachine: (ha-994751) Calling .GetConfigRaw
	I1004 03:18:29.769131   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:29.769285   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:29.769480   30630 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 03:18:29.769497   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:18:29.770831   30630 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 03:18:29.770850   30630 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 03:18:29.770858   30630 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 03:18:29.770868   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:29.772990   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.773299   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:29.773321   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.773460   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:29.773635   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.773787   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.773964   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:29.774099   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:29.774324   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:29.774336   30630 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 03:18:29.870824   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:18:29.870852   30630 main.go:141] libmachine: Detecting the provisioner...
	I1004 03:18:29.870864   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:29.873067   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.873430   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:29.873464   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.873650   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:29.873816   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.873947   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.874038   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:29.874214   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:29.874367   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:29.874377   30630 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 03:18:29.972554   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 03:18:29.972627   30630 main.go:141] libmachine: found compatible host: buildroot
	I1004 03:18:29.972634   30630 main.go:141] libmachine: Provisioning with buildroot...
	I1004 03:18:29.972640   30630 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:18:29.972883   30630 buildroot.go:166] provisioning hostname "ha-994751"
	I1004 03:18:29.972906   30630 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:18:29.973092   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:29.975627   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.976040   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:29.976059   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:29.976197   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:29.976336   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.976489   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:29.976626   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:29.976745   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:29.976951   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:29.976969   30630 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-994751 && echo "ha-994751" | sudo tee /etc/hostname
	I1004 03:18:30.090454   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-994751
	
	I1004 03:18:30.090480   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.094372   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.094783   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.094812   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.094993   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.095167   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.095331   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.095446   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.095586   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:30.095799   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:30.095822   30630 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-994751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-994751/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-994751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:18:30.200998   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:18:30.201031   30630 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 03:18:30.201106   30630 buildroot.go:174] setting up certificates
	I1004 03:18:30.201120   30630 provision.go:84] configureAuth start
	I1004 03:18:30.201131   30630 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:18:30.201353   30630 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:18:30.203920   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.204369   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.204390   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.204563   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.206770   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.207168   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.207195   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.207325   30630 provision.go:143] copyHostCerts
	I1004 03:18:30.207355   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:18:30.207398   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 03:18:30.207407   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:18:30.207474   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 03:18:30.207553   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:18:30.207574   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 03:18:30.207581   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:18:30.207605   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 03:18:30.207644   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:18:30.207661   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 03:18:30.207671   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:18:30.207691   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 03:18:30.207739   30630 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.ha-994751 san=[127.0.0.1 192.168.39.65 ha-994751 localhost minikube]
	I1004 03:18:30.399105   30630 provision.go:177] copyRemoteCerts
	I1004 03:18:30.399156   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:18:30.399185   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.401949   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.402239   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.402273   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.402458   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.402612   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.402732   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.402824   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:30.481271   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:18:30.481342   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 03:18:30.505491   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:18:30.505567   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:18:30.528533   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:18:30.528602   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1004 03:18:30.551611   30630 provision.go:87] duration metric: took 350.480163ms to configureAuth
	I1004 03:18:30.551641   30630 buildroot.go:189] setting minikube options for container-runtime
	I1004 03:18:30.551807   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:18:30.551909   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.554312   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.554641   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.554668   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.554833   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.554998   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.555138   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.555257   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.555398   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:30.555570   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:30.555585   30630 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:18:30.762357   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:18:30.762381   30630 main.go:141] libmachine: Checking connection to Docker...
	I1004 03:18:30.762388   30630 main.go:141] libmachine: (ha-994751) Calling .GetURL
	I1004 03:18:30.763606   30630 main.go:141] libmachine: (ha-994751) DBG | Using libvirt version 6000000
	I1004 03:18:30.765692   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.766020   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.766048   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.766206   30630 main.go:141] libmachine: Docker is up and running!
	I1004 03:18:30.766228   30630 main.go:141] libmachine: Reticulating splines...
	I1004 03:18:30.766236   30630 client.go:171] duration metric: took 24.951657625s to LocalClient.Create
	I1004 03:18:30.766258   30630 start.go:167] duration metric: took 24.951708327s to libmachine.API.Create "ha-994751"
	I1004 03:18:30.766279   30630 start.go:293] postStartSetup for "ha-994751" (driver="kvm2")
	I1004 03:18:30.766291   30630 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:18:30.766310   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.766550   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:18:30.766573   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.768581   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.768893   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.768918   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.769018   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.769215   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.769374   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.769501   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:30.850107   30630 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:18:30.854350   30630 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 03:18:30.854372   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 03:18:30.854448   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 03:18:30.854554   30630 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 03:18:30.854567   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /etc/ssl/certs/168792.pem
	I1004 03:18:30.854687   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:18:30.863939   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:18:30.887968   30630 start.go:296] duration metric: took 121.677235ms for postStartSetup
	I1004 03:18:30.888032   30630 main.go:141] libmachine: (ha-994751) Calling .GetConfigRaw
	I1004 03:18:30.888647   30630 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:18:30.891188   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.891538   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.891578   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.891766   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:18:30.891959   30630 start.go:128] duration metric: took 25.095424862s to createHost
	I1004 03:18:30.891980   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.894352   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.894614   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.894640   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.894753   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.894910   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.895041   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.895137   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:30.895264   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:18:30.895466   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:18:30.895480   30630 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 03:18:30.992599   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011910.970126057
	
	I1004 03:18:30.992618   30630 fix.go:216] guest clock: 1728011910.970126057
	I1004 03:18:30.992625   30630 fix.go:229] Guest: 2024-10-04 03:18:30.970126057 +0000 UTC Remote: 2024-10-04 03:18:30.89197094 +0000 UTC m=+25.204801944 (delta=78.155117ms)
	I1004 03:18:30.992662   30630 fix.go:200] guest clock delta is within tolerance: 78.155117ms
	I1004 03:18:30.992667   30630 start.go:83] releasing machines lock for "ha-994751", held for 25.19620396s
	I1004 03:18:30.992685   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.992896   30630 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:18:30.995326   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.995629   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.995653   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.995813   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.996311   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.996458   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:30.996541   30630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:18:30.996578   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.996668   30630 ssh_runner.go:195] Run: cat /version.json
	I1004 03:18:30.996687   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:30.999188   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.999227   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.999574   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.999599   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.999648   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:30.999673   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:30.999727   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:30.999923   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:30.999936   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:31.000065   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:31.000137   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:31.000197   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:31.000242   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:31.000338   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:31.092724   30630 ssh_runner.go:195] Run: systemctl --version
	I1004 03:18:31.098738   30630 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:18:31.257592   30630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 03:18:31.263326   30630 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 03:18:31.263402   30630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:18:31.278780   30630 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 03:18:31.278800   30630 start.go:495] detecting cgroup driver to use...
	I1004 03:18:31.278866   30630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:18:31.295874   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:18:31.310006   30630 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:18:31.310076   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:18:31.323189   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:18:31.336586   30630 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:18:31.452424   30630 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:18:31.611505   30630 docker.go:233] disabling docker service ...
	I1004 03:18:31.611576   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:18:31.625795   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:18:31.640666   30630 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:18:31.774429   30630 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:18:31.903530   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:18:31.917157   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:18:31.935039   30630 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:18:31.935118   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.945550   30630 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:18:31.945617   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.955961   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.966381   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.976764   30630 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:18:31.987308   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:31.997608   30630 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:32.014334   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:18:32.025406   30630 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:18:32.035105   30630 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 03:18:32.035157   30630 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 03:18:32.048803   30630 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:18:32.058421   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:18:32.175897   30630 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 03:18:32.272377   30630 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:18:32.272435   30630 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:18:32.277743   30630 start.go:563] Will wait 60s for crictl version
	I1004 03:18:32.277805   30630 ssh_runner.go:195] Run: which crictl
	I1004 03:18:32.281362   30630 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:18:32.318848   30630 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 03:18:32.318925   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:18:32.346909   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:18:32.375477   30630 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 03:18:32.376825   30630 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:18:32.379208   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:32.379571   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:32.379594   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:32.379801   30630 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 03:18:32.384207   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:18:32.397053   30630 kubeadm.go:883] updating cluster {Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 03:18:32.397153   30630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:18:32.397223   30630 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:18:32.434648   30630 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 03:18:32.434703   30630 ssh_runner.go:195] Run: which lz4
	I1004 03:18:32.438603   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1004 03:18:32.438682   30630 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 03:18:32.442788   30630 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 03:18:32.442821   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 03:18:33.747633   30630 crio.go:462] duration metric: took 1.308983475s to copy over tarball
	I1004 03:18:33.747699   30630 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 03:18:35.713127   30630 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.965391744s)
	I1004 03:18:35.713157   30630 crio.go:469] duration metric: took 1.965495286s to extract the tarball
	I1004 03:18:35.713167   30630 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 03:18:35.749886   30630 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:18:35.795226   30630 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:18:35.795249   30630 cache_images.go:84] Images are preloaded, skipping loading
	I1004 03:18:35.795257   30630 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.31.1 crio true true} ...
	I1004 03:18:35.795346   30630 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-994751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 03:18:35.795408   30630 ssh_runner.go:195] Run: crio config
	I1004 03:18:35.841695   30630 cni.go:84] Creating CNI manager for ""
	I1004 03:18:35.841718   30630 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1004 03:18:35.841728   30630 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 03:18:35.841746   30630 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-994751 NodeName:ha-994751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 03:18:35.841868   30630 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-994751"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 03:18:35.841893   30630 kube-vip.go:115] generating kube-vip config ...
	I1004 03:18:35.841933   30630 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1004 03:18:35.858111   30630 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1004 03:18:35.858218   30630 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1004 03:18:35.858274   30630 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:18:35.867809   30630 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 03:18:35.867872   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1004 03:18:35.876830   30630 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1004 03:18:35.892172   30630 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:18:35.907631   30630 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1004 03:18:35.923147   30630 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1004 03:18:35.939242   30630 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:18:35.943241   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:18:35.955036   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:18:36.063830   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:18:36.080131   30630 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751 for IP: 192.168.39.65
	I1004 03:18:36.080153   30630 certs.go:194] generating shared ca certs ...
	I1004 03:18:36.080169   30630 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.080303   30630 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 03:18:36.080336   30630 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 03:18:36.080345   30630 certs.go:256] generating profile certs ...
	I1004 03:18:36.080388   30630 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key
	I1004 03:18:36.080414   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt with IP's: []
	I1004 03:18:36.205325   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt ...
	I1004 03:18:36.205354   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt: {Name:mk097459d54d355cf05d74a196b72b51ed16216c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.205539   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key ...
	I1004 03:18:36.205553   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key: {Name:mka6efef398570320df79b26ee2d84116b88400b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.205628   30630 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.211fcd35
	I1004 03:18:36.205642   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.211fcd35 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65 192.168.39.254]
	I1004 03:18:36.278398   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.211fcd35 ...
	I1004 03:18:36.278426   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.211fcd35: {Name:mk5a54fedcb658e02d5a59c4cc7f959d0efc3b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.278574   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.211fcd35 ...
	I1004 03:18:36.278586   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.211fcd35: {Name:mk30bcb47c9e314eff3c9b6a3bb1c1b8ba019417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.278653   30630 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.211fcd35 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt
	I1004 03:18:36.278741   30630 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.211fcd35 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key
	I1004 03:18:36.278802   30630 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key
	I1004 03:18:36.278825   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt with IP's: []
	I1004 03:18:36.411462   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt ...
	I1004 03:18:36.411499   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt: {Name:mk5cbb9b0a13c8121c937d53956001313fc362d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.411652   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key ...
	I1004 03:18:36.411663   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key: {Name:mkcfa953ddb2aa55fb392dd2b0300dc4d7ed9a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:36.411729   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:18:36.411745   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:18:36.411758   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:18:36.411771   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:18:36.411798   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:18:36.411811   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:18:36.411823   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:18:36.411835   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:18:36.411884   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 03:18:36.411919   30630 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 03:18:36.411928   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:18:36.411953   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:18:36.411976   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:18:36.411996   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 03:18:36.412030   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:18:36.412053   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:18:36.412066   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem -> /usr/share/ca-certificates/16879.pem
	I1004 03:18:36.412078   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /usr/share/ca-certificates/168792.pem
	I1004 03:18:36.412548   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:18:36.441146   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 03:18:36.468175   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:18:36.494488   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 03:18:36.520930   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1004 03:18:36.546306   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 03:18:36.571622   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:18:36.595650   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:18:36.619154   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:18:36.643284   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 03:18:36.666998   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 03:18:36.692308   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 03:18:36.710569   30630 ssh_runner.go:195] Run: openssl version
	I1004 03:18:36.722532   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:18:36.738971   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:18:36.743511   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:18:36.743568   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:18:36.749416   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:18:36.760315   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 03:18:36.771516   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 03:18:36.776032   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:18:36.776090   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 03:18:36.781784   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 03:18:36.792883   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 03:18:36.804051   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 03:18:36.808536   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:18:36.808596   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 03:18:36.814260   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:18:36.827637   30630 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:18:36.833576   30630 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 03:18:36.833628   30630 kubeadm.go:392] StartCluster: {Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:18:36.833720   30630 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 03:18:36.833768   30630 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 03:18:36.890855   30630 cri.go:89] found id: ""
	I1004 03:18:36.890927   30630 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 03:18:36.902870   30630 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 03:18:36.912801   30630 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 03:18:36.922312   30630 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 03:18:36.922332   30630 kubeadm.go:157] found existing configuration files:
	
	I1004 03:18:36.922378   30630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 03:18:36.931373   30630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 03:18:36.931434   30630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 03:18:36.940992   30630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 03:18:36.949951   30630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 03:18:36.950008   30630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 03:18:36.959253   30630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 03:18:36.968235   30630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 03:18:36.968290   30630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 03:18:36.977554   30630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 03:18:36.986351   30630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 03:18:36.986408   30630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 03:18:36.995719   30630 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 03:18:37.089352   30630 kubeadm.go:310] W1004 03:18:37.073375     834 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 03:18:37.090411   30630 kubeadm.go:310] W1004 03:18:37.074383     834 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 03:18:37.191769   30630 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 03:18:47.918991   30630 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 03:18:47.919112   30630 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 03:18:47.919261   30630 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 03:18:47.919365   30630 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 03:18:47.919464   30630 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 03:18:47.919518   30630 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 03:18:47.920818   30630 out.go:235]   - Generating certificates and keys ...
	I1004 03:18:47.920882   30630 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 03:18:47.920936   30630 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 03:18:47.921009   30630 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 03:18:47.921075   30630 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1004 03:18:47.921133   30630 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1004 03:18:47.921203   30630 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1004 03:18:47.921280   30630 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1004 03:18:47.921443   30630 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-994751 localhost] and IPs [192.168.39.65 127.0.0.1 ::1]
	I1004 03:18:47.921519   30630 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1004 03:18:47.921666   30630 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-994751 localhost] and IPs [192.168.39.65 127.0.0.1 ::1]
	I1004 03:18:47.921762   30630 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 03:18:47.921849   30630 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 03:18:47.921910   30630 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1004 03:18:47.922005   30630 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 03:18:47.922057   30630 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 03:18:47.922112   30630 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 03:18:47.922177   30630 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 03:18:47.922290   30630 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 03:18:47.922377   30630 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 03:18:47.922447   30630 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 03:18:47.922519   30630 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 03:18:47.923983   30630 out.go:235]   - Booting up control plane ...
	I1004 03:18:47.924085   30630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 03:18:47.924153   30630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 03:18:47.924208   30630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 03:18:47.924334   30630 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 03:18:47.924425   30630 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 03:18:47.924472   30630 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 03:18:47.924582   30630 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 03:18:47.924675   30630 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 03:18:47.924735   30630 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001267899s
	I1004 03:18:47.924846   30630 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 03:18:47.924901   30630 kubeadm.go:310] [api-check] The API server is healthy after 5.62627754s
	I1004 03:18:47.924992   30630 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 03:18:47.925097   30630 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 03:18:47.925151   30630 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 03:18:47.925310   30630 kubeadm.go:310] [mark-control-plane] Marking the node ha-994751 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 03:18:47.925388   30630 kubeadm.go:310] [bootstrap-token] Using token: t8dola.kmwzcq881z4dnfcq
	I1004 03:18:47.926624   30630 out.go:235]   - Configuring RBAC rules ...
	I1004 03:18:47.926738   30630 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 03:18:47.926809   30630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 03:18:47.926957   30630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 03:18:47.927140   30630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 03:18:47.927310   30630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 03:18:47.927398   30630 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 03:18:47.927508   30630 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 03:18:47.927559   30630 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 03:18:47.927607   30630 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 03:18:47.927613   30630 kubeadm.go:310] 
	I1004 03:18:47.927661   30630 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 03:18:47.927667   30630 kubeadm.go:310] 
	I1004 03:18:47.927736   30630 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 03:18:47.927742   30630 kubeadm.go:310] 
	I1004 03:18:47.927764   30630 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 03:18:47.927863   30630 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 03:18:47.927918   30630 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 03:18:47.927926   30630 kubeadm.go:310] 
	I1004 03:18:47.927996   30630 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 03:18:47.928006   30630 kubeadm.go:310] 
	I1004 03:18:47.928052   30630 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 03:18:47.928059   30630 kubeadm.go:310] 
	I1004 03:18:47.928102   30630 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 03:18:47.928189   30630 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 03:18:47.928261   30630 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 03:18:47.928268   30630 kubeadm.go:310] 
	I1004 03:18:47.928337   30630 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 03:18:47.928401   30630 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 03:18:47.928407   30630 kubeadm.go:310] 
	I1004 03:18:47.928480   30630 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t8dola.kmwzcq881z4dnfcq \
	I1004 03:18:47.928565   30630 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 \
	I1004 03:18:47.928587   30630 kubeadm.go:310] 	--control-plane 
	I1004 03:18:47.928593   30630 kubeadm.go:310] 
	I1004 03:18:47.928677   30630 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 03:18:47.928689   30630 kubeadm.go:310] 
	I1004 03:18:47.928756   30630 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t8dola.kmwzcq881z4dnfcq \
	I1004 03:18:47.928856   30630 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 
	I1004 03:18:47.928865   30630 cni.go:84] Creating CNI manager for ""
	I1004 03:18:47.928870   30630 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1004 03:18:47.930177   30630 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1004 03:18:47.931356   30630 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1004 03:18:47.936846   30630 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1004 03:18:47.936861   30630 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1004 03:18:47.954946   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1004 03:18:48.341839   30630 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 03:18:48.341927   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-994751 minikube.k8s.io/updated_at=2024_10_04T03_18_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-994751 minikube.k8s.io/primary=true
	I1004 03:18:48.341931   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:48.378883   30630 ops.go:34] apiserver oom_adj: -16
	I1004 03:18:48.535248   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:49.035895   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:49.535506   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:50.036160   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:50.536177   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:51.036074   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:51.535453   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:52.036318   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 03:18:52.141351   30630 kubeadm.go:1113] duration metric: took 3.799503635s to wait for elevateKubeSystemPrivileges
	I1004 03:18:52.141482   30630 kubeadm.go:394] duration metric: took 15.307852794s to StartCluster
	I1004 03:18:52.141506   30630 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:52.141595   30630 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:18:52.142340   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:18:52.142543   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 03:18:52.142540   30630 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:18:52.142619   30630 start.go:241] waiting for startup goroutines ...
	I1004 03:18:52.142559   30630 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 03:18:52.142650   30630 addons.go:69] Setting default-storageclass=true in profile "ha-994751"
	I1004 03:18:52.142673   30630 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-994751"
	I1004 03:18:52.142653   30630 addons.go:69] Setting storage-provisioner=true in profile "ha-994751"
	I1004 03:18:52.142785   30630 addons.go:234] Setting addon storage-provisioner=true in "ha-994751"
	I1004 03:18:52.142836   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:18:52.142751   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:18:52.143105   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.143135   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.143203   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.143243   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.158739   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38855
	I1004 03:18:52.159139   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.159746   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.159801   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.160123   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.160704   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.160750   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.163696   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35851
	I1004 03:18:52.164259   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.164849   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.164876   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.165236   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.165397   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:18:52.167571   30630 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:18:52.167892   30630 kapi.go:59] client config for ha-994751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key", CAFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 03:18:52.168431   30630 cert_rotation.go:140] Starting client certificate rotation controller
	I1004 03:18:52.168621   30630 addons.go:234] Setting addon default-storageclass=true in "ha-994751"
	I1004 03:18:52.168661   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:18:52.168962   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.168995   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.177647   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33667
	I1004 03:18:52.178272   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.178780   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.178807   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.179185   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.179369   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:18:52.181245   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:52.182949   30630 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 03:18:52.184312   30630 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 03:18:52.184328   30630 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 03:18:52.184342   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:52.185802   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36897
	I1004 03:18:52.186249   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.186707   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.186731   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.187053   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.187403   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:52.187660   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:52.187699   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:52.187838   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:52.187860   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:52.188023   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:52.188171   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:52.188318   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:52.188522   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:52.202680   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36473
	I1004 03:18:52.203159   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:52.203886   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:52.203918   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:52.204247   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:52.204428   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:18:52.205967   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:18:52.206173   30630 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 03:18:52.206189   30630 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 03:18:52.206206   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:18:52.208832   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:52.209269   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:18:52.209304   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:18:52.209405   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:18:52.209567   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:18:52.209709   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:18:52.209838   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:18:52.346822   30630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 03:18:52.355141   30630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 03:18:52.371008   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 03:18:52.715722   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.715742   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.716027   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.716068   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.716084   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.716095   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.716104   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.716350   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.716358   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.716370   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.716432   30630 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1004 03:18:52.716457   30630 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1004 03:18:52.716568   30630 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1004 03:18:52.716579   30630 round_trippers.go:469] Request Headers:
	I1004 03:18:52.716592   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:18:52.716603   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:18:52.723900   30630 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:18:52.724457   30630 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1004 03:18:52.724472   30630 round_trippers.go:469] Request Headers:
	I1004 03:18:52.724481   30630 round_trippers.go:473]     Content-Type: application/json
	I1004 03:18:52.724485   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:18:52.724494   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:18:52.728158   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:18:52.728358   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.728379   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.728631   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.728667   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.728678   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.991032   30630 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1004 03:18:52.991106   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.991118   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.991464   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.991518   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.991525   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.991538   30630 main.go:141] libmachine: Making call to close driver server
	I1004 03:18:52.991549   30630 main.go:141] libmachine: (ha-994751) Calling .Close
	I1004 03:18:52.991787   30630 main.go:141] libmachine: (ha-994751) DBG | Closing plugin on server side
	I1004 03:18:52.991815   30630 main.go:141] libmachine: Successfully made call to close driver server
	I1004 03:18:52.991835   30630 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 03:18:52.993564   30630 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1004 03:18:52.994914   30630 addons.go:510] duration metric: took 852.347466ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1004 03:18:52.994963   30630 start.go:246] waiting for cluster config update ...
	I1004 03:18:52.994978   30630 start.go:255] writing updated cluster config ...
	I1004 03:18:52.996475   30630 out.go:201] 
	I1004 03:18:52.997828   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:18:52.997937   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:18:52.999684   30630 out.go:177] * Starting "ha-994751-m02" control-plane node in "ha-994751" cluster
	I1004 03:18:53.001098   30630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:18:53.001129   30630 cache.go:56] Caching tarball of preloaded images
	I1004 03:18:53.001252   30630 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 03:18:53.001270   30630 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:18:53.001389   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:18:53.001704   30630 start.go:360] acquireMachinesLock for ha-994751-m02: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 03:18:53.001767   30630 start.go:364] duration metric: took 36.717µs to acquireMachinesLock for "ha-994751-m02"
	I1004 03:18:53.001788   30630 start.go:93] Provisioning new machine with config: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:18:53.001888   30630 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1004 03:18:53.003601   30630 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 03:18:53.003685   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:18:53.003710   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:18:53.018286   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I1004 03:18:53.018739   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:18:53.019227   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:18:53.019248   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:18:53.019586   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:18:53.019746   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetMachineName
	I1004 03:18:53.019878   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:18:53.020036   30630 start.go:159] libmachine.API.Create for "ha-994751" (driver="kvm2")
	I1004 03:18:53.020058   30630 client.go:168] LocalClient.Create starting
	I1004 03:18:53.020084   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 03:18:53.020121   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:18:53.020141   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:18:53.020189   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 03:18:53.020206   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:18:53.020216   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:18:53.020231   30630 main.go:141] libmachine: Running pre-create checks...
	I1004 03:18:53.020238   30630 main.go:141] libmachine: (ha-994751-m02) Calling .PreCreateCheck
	I1004 03:18:53.020407   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetConfigRaw
	I1004 03:18:53.020742   30630 main.go:141] libmachine: Creating machine...
	I1004 03:18:53.020759   30630 main.go:141] libmachine: (ha-994751-m02) Calling .Create
	I1004 03:18:53.020907   30630 main.go:141] libmachine: (ha-994751-m02) Creating KVM machine...
	I1004 03:18:53.022100   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found existing default KVM network
	I1004 03:18:53.022275   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found existing private KVM network mk-ha-994751
	I1004 03:18:53.022411   30630 main.go:141] libmachine: (ha-994751-m02) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02 ...
	I1004 03:18:53.022435   30630 main.go:141] libmachine: (ha-994751-m02) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 03:18:53.022495   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:53.022407   31016 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:53.022574   30630 main.go:141] libmachine: (ha-994751-m02) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 03:18:53.247842   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:53.247679   31016 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa...
	I1004 03:18:53.574709   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:53.574567   31016 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/ha-994751-m02.rawdisk...
	I1004 03:18:53.574744   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Writing magic tar header
	I1004 03:18:53.574759   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Writing SSH key tar header
	I1004 03:18:53.574776   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:53.574706   31016 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02 ...
	I1004 03:18:53.574856   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02
	I1004 03:18:53.574886   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02 (perms=drwx------)
	I1004 03:18:53.574896   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 03:18:53.574906   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 03:18:53.574926   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 03:18:53.574938   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 03:18:53.574962   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 03:18:53.574971   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:18:53.574979   30630 main.go:141] libmachine: (ha-994751-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 03:18:53.574992   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 03:18:53.575005   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 03:18:53.575014   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home/jenkins
	I1004 03:18:53.575020   30630 main.go:141] libmachine: (ha-994751-m02) Creating domain...
	I1004 03:18:53.575036   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Checking permissions on dir: /home
	I1004 03:18:53.575046   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Skipping /home - not owner
	I1004 03:18:53.575952   30630 main.go:141] libmachine: (ha-994751-m02) define libvirt domain using xml: 
	I1004 03:18:53.575978   30630 main.go:141] libmachine: (ha-994751-m02) <domain type='kvm'>
	I1004 03:18:53.575998   30630 main.go:141] libmachine: (ha-994751-m02)   <name>ha-994751-m02</name>
	I1004 03:18:53.576012   30630 main.go:141] libmachine: (ha-994751-m02)   <memory unit='MiB'>2200</memory>
	I1004 03:18:53.576021   30630 main.go:141] libmachine: (ha-994751-m02)   <vcpu>2</vcpu>
	I1004 03:18:53.576030   30630 main.go:141] libmachine: (ha-994751-m02)   <features>
	I1004 03:18:53.576038   30630 main.go:141] libmachine: (ha-994751-m02)     <acpi/>
	I1004 03:18:53.576047   30630 main.go:141] libmachine: (ha-994751-m02)     <apic/>
	I1004 03:18:53.576055   30630 main.go:141] libmachine: (ha-994751-m02)     <pae/>
	I1004 03:18:53.576064   30630 main.go:141] libmachine: (ha-994751-m02)     
	I1004 03:18:53.576072   30630 main.go:141] libmachine: (ha-994751-m02)   </features>
	I1004 03:18:53.576082   30630 main.go:141] libmachine: (ha-994751-m02)   <cpu mode='host-passthrough'>
	I1004 03:18:53.576089   30630 main.go:141] libmachine: (ha-994751-m02)   
	I1004 03:18:53.576099   30630 main.go:141] libmachine: (ha-994751-m02)   </cpu>
	I1004 03:18:53.576106   30630 main.go:141] libmachine: (ha-994751-m02)   <os>
	I1004 03:18:53.576119   30630 main.go:141] libmachine: (ha-994751-m02)     <type>hvm</type>
	I1004 03:18:53.576130   30630 main.go:141] libmachine: (ha-994751-m02)     <boot dev='cdrom'/>
	I1004 03:18:53.576135   30630 main.go:141] libmachine: (ha-994751-m02)     <boot dev='hd'/>
	I1004 03:18:53.576144   30630 main.go:141] libmachine: (ha-994751-m02)     <bootmenu enable='no'/>
	I1004 03:18:53.576152   30630 main.go:141] libmachine: (ha-994751-m02)   </os>
	I1004 03:18:53.576165   30630 main.go:141] libmachine: (ha-994751-m02)   <devices>
	I1004 03:18:53.576176   30630 main.go:141] libmachine: (ha-994751-m02)     <disk type='file' device='cdrom'>
	I1004 03:18:53.576189   30630 main.go:141] libmachine: (ha-994751-m02)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/boot2docker.iso'/>
	I1004 03:18:53.576200   30630 main.go:141] libmachine: (ha-994751-m02)       <target dev='hdc' bus='scsi'/>
	I1004 03:18:53.576208   30630 main.go:141] libmachine: (ha-994751-m02)       <readonly/>
	I1004 03:18:53.576216   30630 main.go:141] libmachine: (ha-994751-m02)     </disk>
	I1004 03:18:53.576224   30630 main.go:141] libmachine: (ha-994751-m02)     <disk type='file' device='disk'>
	I1004 03:18:53.576236   30630 main.go:141] libmachine: (ha-994751-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 03:18:53.576251   30630 main.go:141] libmachine: (ha-994751-m02)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/ha-994751-m02.rawdisk'/>
	I1004 03:18:53.576261   30630 main.go:141] libmachine: (ha-994751-m02)       <target dev='hda' bus='virtio'/>
	I1004 03:18:53.576285   30630 main.go:141] libmachine: (ha-994751-m02)     </disk>
	I1004 03:18:53.576307   30630 main.go:141] libmachine: (ha-994751-m02)     <interface type='network'>
	I1004 03:18:53.576317   30630 main.go:141] libmachine: (ha-994751-m02)       <source network='mk-ha-994751'/>
	I1004 03:18:53.576324   30630 main.go:141] libmachine: (ha-994751-m02)       <model type='virtio'/>
	I1004 03:18:53.576335   30630 main.go:141] libmachine: (ha-994751-m02)     </interface>
	I1004 03:18:53.576342   30630 main.go:141] libmachine: (ha-994751-m02)     <interface type='network'>
	I1004 03:18:53.576368   30630 main.go:141] libmachine: (ha-994751-m02)       <source network='default'/>
	I1004 03:18:53.576386   30630 main.go:141] libmachine: (ha-994751-m02)       <model type='virtio'/>
	I1004 03:18:53.576401   30630 main.go:141] libmachine: (ha-994751-m02)     </interface>
	I1004 03:18:53.576413   30630 main.go:141] libmachine: (ha-994751-m02)     <serial type='pty'>
	I1004 03:18:53.576421   30630 main.go:141] libmachine: (ha-994751-m02)       <target port='0'/>
	I1004 03:18:53.576429   30630 main.go:141] libmachine: (ha-994751-m02)     </serial>
	I1004 03:18:53.576437   30630 main.go:141] libmachine: (ha-994751-m02)     <console type='pty'>
	I1004 03:18:53.576447   30630 main.go:141] libmachine: (ha-994751-m02)       <target type='serial' port='0'/>
	I1004 03:18:53.576455   30630 main.go:141] libmachine: (ha-994751-m02)     </console>
	I1004 03:18:53.576462   30630 main.go:141] libmachine: (ha-994751-m02)     <rng model='virtio'>
	I1004 03:18:53.576468   30630 main.go:141] libmachine: (ha-994751-m02)       <backend model='random'>/dev/random</backend>
	I1004 03:18:53.576474   30630 main.go:141] libmachine: (ha-994751-m02)     </rng>
	I1004 03:18:53.576479   30630 main.go:141] libmachine: (ha-994751-m02)     
	I1004 03:18:53.576482   30630 main.go:141] libmachine: (ha-994751-m02)     
	I1004 03:18:53.576487   30630 main.go:141] libmachine: (ha-994751-m02)   </devices>
	I1004 03:18:53.576497   30630 main.go:141] libmachine: (ha-994751-m02) </domain>
	I1004 03:18:53.576508   30630 main.go:141] libmachine: (ha-994751-m02) 
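	(Editor's note: the XML echoed above is the libvirt domain the kvm2 driver defines for the secondary node. As a rough sketch of the same step done by hand, and not the driver's actual code path, one could feed a saved copy of that XML to virsh; the file path below is a placeholder.)

// sketch: define and start a libvirt domain from an XML file via virsh,
// assuming the libvirt client tools are installed and the XML was saved
// to /tmp/ha-994751-m02.xml (placeholder path).
package main

import (
	"log"
	"os/exec"
)

func main() {
	for _, args := range [][]string{
		{"virsh", "--connect", "qemu:///system", "define", "/tmp/ha-994751-m02.xml"},
		{"virsh", "--connect", "qemu:///system", "start", "ha-994751-m02"},
	} {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			log.Fatalf("%v failed: %v\n%s", args, err, out)
		}
		log.Printf("%v: %s", args, out)
	}
}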
	I1004 03:18:53.583962   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:dd:b1:40 in network default
	I1004 03:18:53.584709   30630 main.go:141] libmachine: (ha-994751-m02) Ensuring networks are active...
	I1004 03:18:53.584740   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:53.585441   30630 main.go:141] libmachine: (ha-994751-m02) Ensuring network default is active
	I1004 03:18:53.585785   30630 main.go:141] libmachine: (ha-994751-m02) Ensuring network mk-ha-994751 is active
	I1004 03:18:53.586177   30630 main.go:141] libmachine: (ha-994751-m02) Getting domain xml...
	I1004 03:18:53.586870   30630 main.go:141] libmachine: (ha-994751-m02) Creating domain...
	I1004 03:18:54.836669   30630 main.go:141] libmachine: (ha-994751-m02) Waiting to get IP...
	I1004 03:18:54.837648   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:54.838068   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:54.838093   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:54.838048   31016 retry.go:31] will retry after 198.927613ms: waiting for machine to come up
	I1004 03:18:55.038453   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:55.038905   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:55.039050   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:55.039003   31016 retry.go:31] will retry after 306.415928ms: waiting for machine to come up
	I1004 03:18:55.347491   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:55.347913   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:55.347941   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:55.347876   31016 retry.go:31] will retry after 320.808758ms: waiting for machine to come up
	I1004 03:18:55.670381   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:55.670806   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:55.670832   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:55.670773   31016 retry.go:31] will retry after 393.714723ms: waiting for machine to come up
	I1004 03:18:56.066334   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:56.066789   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:56.066816   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:56.066737   31016 retry.go:31] will retry after 703.186123ms: waiting for machine to come up
	I1004 03:18:56.771284   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:56.771771   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:56.771816   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:56.771717   31016 retry.go:31] will retry after 687.11987ms: waiting for machine to come up
	I1004 03:18:57.460710   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:57.461089   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:57.461132   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:57.461080   31016 retry.go:31] will retry after 992.439827ms: waiting for machine to come up
	I1004 03:18:58.455669   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:58.456094   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:58.456109   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:58.456062   31016 retry.go:31] will retry after 1.176479657s: waiting for machine to come up
	I1004 03:18:59.634390   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:18:59.634814   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:18:59.634839   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:18:59.634775   31016 retry.go:31] will retry after 1.214254179s: waiting for machine to come up
	I1004 03:19:00.850238   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:00.850699   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:00.850731   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:00.850669   31016 retry.go:31] will retry after 1.755607467s: waiting for machine to come up
	I1004 03:19:02.608547   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:02.608946   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:02.608966   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:02.608910   31016 retry.go:31] will retry after 1.912286614s: waiting for machine to come up
	I1004 03:19:04.522463   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:04.522888   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:04.522917   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:04.522826   31016 retry.go:31] will retry after 2.242710645s: waiting for machine to come up
	I1004 03:19:06.766980   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:06.767510   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:06.767541   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:06.767449   31016 retry.go:31] will retry after 3.842874805s: waiting for machine to come up
	I1004 03:19:10.612857   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:10.613334   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find current IP address of domain ha-994751-m02 in network mk-ha-994751
	I1004 03:19:10.613359   30630 main.go:141] libmachine: (ha-994751-m02) DBG | I1004 03:19:10.613293   31016 retry.go:31] will retry after 4.05361864s: waiting for machine to come up
	I1004 03:19:14.669514   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.670029   30630 main.go:141] libmachine: (ha-994751-m02) Found IP for machine: 192.168.39.117
	I1004 03:19:14.670051   30630 main.go:141] libmachine: (ha-994751-m02) Reserving static IP address...
	I1004 03:19:14.670068   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has current primary IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.670622   30630 main.go:141] libmachine: (ha-994751-m02) DBG | unable to find host DHCP lease matching {name: "ha-994751-m02", mac: "52:54:00:b0:e7:80", ip: "192.168.39.117"} in network mk-ha-994751
	I1004 03:19:14.745981   30630 main.go:141] libmachine: (ha-994751-m02) Reserved static IP address: 192.168.39.117
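	(Editor's note: the run of "will retry after …" lines above is the driver polling for a DHCP lease with growing delays until the domain reports an IP. A minimal sketch of that wait-with-backoff pattern is below; lookupIP is a hypothetical stand-in for the actual lease query, not minikube's function.)

// sketch: poll for a condition with increasing, jittered delays, similar
// in spirit to the "will retry after" lines in the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical placeholder for "read the DHCP lease for
// this MAC from the libvirt network".
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// grow the delay, add a little jitter, and cap it
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}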
	I1004 03:19:14.746008   30630 main.go:141] libmachine: (ha-994751-m02) Waiting for SSH to be available...
	I1004 03:19:14.746017   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Getting to WaitForSSH function...
	I1004 03:19:14.748804   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.749281   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:14.749310   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.749511   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Using SSH client type: external
	I1004 03:19:14.749551   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa (-rw-------)
	I1004 03:19:14.749581   30630 main.go:141] libmachine: (ha-994751-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.117 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 03:19:14.749606   30630 main.go:141] libmachine: (ha-994751-m02) DBG | About to run SSH command:
	I1004 03:19:14.749624   30630 main.go:141] libmachine: (ha-994751-m02) DBG | exit 0
	I1004 03:19:14.876139   30630 main.go:141] libmachine: (ha-994751-m02) DBG | SSH cmd err, output: <nil>: 
	I1004 03:19:14.876447   30630 main.go:141] libmachine: (ha-994751-m02) KVM machine creation complete!
	I1004 03:19:14.876809   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetConfigRaw
	I1004 03:19:14.877356   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:14.877589   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:14.877768   30630 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 03:19:14.877780   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetState
	I1004 03:19:14.879122   30630 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 03:19:14.879138   30630 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 03:19:14.879143   30630 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 03:19:14.879149   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:14.881593   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.881953   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:14.881980   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.882095   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:14.882322   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:14.882470   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:14.882643   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:14.882838   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:14.883073   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:14.883086   30630 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 03:19:14.983285   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
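	(Editor's note: the WaitForSSH exchange above amounts to running a no-op "exit 0" over SSH and checking the exit status. A rough sketch of that reachability check using golang.org/x/crypto/ssh follows; the host, user, and key path are copied from the log and should be treated as placeholders.)

// sketch: confirm a host is reachable over SSH by running "exit 0",
// assuming key-based auth with the machine's id_rsa file.
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; no known_hosts check
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.117:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	if err := session.Run("exit 0"); err != nil {
		log.Fatalf("SSH reachable but command failed: %v", err)
	}
	log.Println("SSH is available")
}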
	I1004 03:19:14.983306   30630 main.go:141] libmachine: Detecting the provisioner...
	I1004 03:19:14.983312   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:14.986285   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.986741   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:14.986757   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:14.987055   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:14.987278   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:14.987439   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:14.987656   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:14.987873   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:14.988031   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:14.988042   30630 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 03:19:15.088950   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 03:19:15.089011   30630 main.go:141] libmachine: found compatible host: buildroot
	I1004 03:19:15.089017   30630 main.go:141] libmachine: Provisioning with buildroot...
	I1004 03:19:15.089024   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetMachineName
	I1004 03:19:15.089254   30630 buildroot.go:166] provisioning hostname "ha-994751-m02"
	I1004 03:19:15.089274   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetMachineName
	I1004 03:19:15.089431   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.092470   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.092890   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.092918   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.093111   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.093289   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.093421   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.093532   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.093663   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:15.093819   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:15.093835   30630 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-994751-m02 && echo "ha-994751-m02" | sudo tee /etc/hostname
	I1004 03:19:15.206985   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-994751-m02
	
	I1004 03:19:15.207013   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.210129   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.210417   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.210457   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.210609   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.210806   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.210951   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.211140   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.211322   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:15.211488   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:15.211503   30630 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-994751-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-994751-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-994751-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:19:15.321696   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:19:15.321728   30630 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 03:19:15.321748   30630 buildroot.go:174] setting up certificates
	I1004 03:19:15.321761   30630 provision.go:84] configureAuth start
	I1004 03:19:15.321773   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetMachineName
	I1004 03:19:15.322055   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetIP
	I1004 03:19:15.324655   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.325067   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.325090   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.325226   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.327479   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.327889   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.327929   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.328106   30630 provision.go:143] copyHostCerts
	I1004 03:19:15.328139   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:19:15.328171   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 03:19:15.328185   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:19:15.328272   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 03:19:15.328393   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:19:15.328420   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 03:19:15.328430   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:19:15.328468   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 03:19:15.328620   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:19:15.328652   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 03:19:15.328662   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:19:15.328718   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 03:19:15.328821   30630 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.ha-994751-m02 san=[127.0.0.1 192.168.39.117 ha-994751-m02 localhost minikube]
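	(Editor's note: the "generating server cert" line records the SANs minted for this node: loopback, the node IP, and its hostnames. Below is a minimal sketch of issuing such a cert from an existing CA with Go's crypto/x509. It assumes the CA key is PKCS#1 PEM and uses illustrative file names, which may not match what minikube actually writes.)

// sketch: sign a server certificate with IP and DNS SANs using an
// existing CA; paths and SAN values mirror the log, everything else
// is illustrative.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads a file and returns the DER bytes of its first PEM block.
func mustPEM(path string) []byte {
	b, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(b)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem"))
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem")) // assumes PKCS#1 key
	if err != nil {
		log.Fatal(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-994751-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-994751-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.117")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}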
	I1004 03:19:15.560527   30630 provision.go:177] copyRemoteCerts
	I1004 03:19:15.560590   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:19:15.560612   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.563747   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.564236   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.564307   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.564520   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.564706   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.564861   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.565036   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:19:15.646851   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:19:15.646919   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1004 03:19:15.672945   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:19:15.673021   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 03:19:15.699880   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:19:15.699960   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:19:15.725929   30630 provision.go:87] duration metric: took 404.139584ms to configureAuth
	I1004 03:19:15.725975   30630 buildroot.go:189] setting minikube options for container-runtime
	I1004 03:19:15.726189   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:19:15.726282   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.729150   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.729586   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.729623   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.729761   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.729951   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.730107   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.730283   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.730477   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:15.730682   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:15.730704   30630 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:19:15.953783   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:19:15.953808   30630 main.go:141] libmachine: Checking connection to Docker...
	I1004 03:19:15.953817   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetURL
	I1004 03:19:15.955088   30630 main.go:141] libmachine: (ha-994751-m02) DBG | Using libvirt version 6000000
	I1004 03:19:15.957213   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.957617   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.957642   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.957827   30630 main.go:141] libmachine: Docker is up and running!
	I1004 03:19:15.957841   30630 main.go:141] libmachine: Reticulating splines...
	I1004 03:19:15.957847   30630 client.go:171] duration metric: took 22.937783647s to LocalClient.Create
	I1004 03:19:15.957867   30630 start.go:167] duration metric: took 22.937832099s to libmachine.API.Create "ha-994751"
	I1004 03:19:15.957875   30630 start.go:293] postStartSetup for "ha-994751-m02" (driver="kvm2")
	I1004 03:19:15.957884   30630 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:19:15.957899   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:15.958102   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:19:15.958124   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:15.960392   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.960717   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:15.960745   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:15.960883   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:15.961062   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:15.961225   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:15.961368   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:19:16.042404   30630 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:19:16.047363   30630 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 03:19:16.047388   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 03:19:16.047468   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 03:19:16.047535   30630 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 03:19:16.047546   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /etc/ssl/certs/168792.pem
	I1004 03:19:16.047622   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:19:16.057062   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:19:16.082885   30630 start.go:296] duration metric: took 124.998047ms for postStartSetup
	I1004 03:19:16.082935   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetConfigRaw
	I1004 03:19:16.083581   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetIP
	I1004 03:19:16.086204   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.086582   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.086605   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.086841   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:19:16.087032   30630 start.go:128] duration metric: took 23.085132614s to createHost
	I1004 03:19:16.087053   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:16.089417   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.089782   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.089807   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.089984   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:16.090129   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:16.090241   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:16.090315   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:16.090436   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:19:16.090606   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I1004 03:19:16.090615   30630 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 03:19:16.192923   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011956.165669680
	
	I1004 03:19:16.192949   30630 fix.go:216] guest clock: 1728011956.165669680
	I1004 03:19:16.192957   30630 fix.go:229] Guest: 2024-10-04 03:19:16.16566968 +0000 UTC Remote: 2024-10-04 03:19:16.08704226 +0000 UTC m=+70.399873263 (delta=78.62742ms)
	I1004 03:19:16.192972   30630 fix.go:200] guest clock delta is within tolerance: 78.62742ms
	I1004 03:19:16.192978   30630 start.go:83] releasing machines lock for "ha-994751-m02", held for 23.191201934s
	I1004 03:19:16.193000   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:16.193291   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetIP
	I1004 03:19:16.196268   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.196769   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.196799   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.199156   30630 out.go:177] * Found network options:
	I1004 03:19:16.200650   30630 out.go:177]   - NO_PROXY=192.168.39.65
	W1004 03:19:16.201984   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:19:16.202013   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:16.202608   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:16.202783   30630 main.go:141] libmachine: (ha-994751-m02) Calling .DriverName
	I1004 03:19:16.202904   30630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:19:16.202945   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	W1004 03:19:16.203033   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:19:16.203114   30630 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:19:16.203136   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHHostname
	I1004 03:19:16.205729   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.205978   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.206109   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.206134   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.206286   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:16.206384   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:16.206425   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:16.206455   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:16.206610   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:16.206681   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHPort
	I1004 03:19:16.206748   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:19:16.206786   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHKeyPath
	I1004 03:19:16.206947   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetSSHUsername
	I1004 03:19:16.207052   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m02/id_rsa Username:docker}
	I1004 03:19:16.451088   30630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 03:19:16.457611   30630 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 03:19:16.457679   30630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:19:16.474500   30630 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 03:19:16.474524   30630 start.go:495] detecting cgroup driver to use...
	I1004 03:19:16.474577   30630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:19:16.491337   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:19:16.505852   30630 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:19:16.505915   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:19:16.519394   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:19:16.533389   30630 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:19:16.647440   30630 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:19:16.796026   30630 docker.go:233] disabling docker service ...
	I1004 03:19:16.796090   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:19:16.810390   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:19:16.824447   30630 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:19:16.967078   30630 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:19:17.099949   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:19:17.114752   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:19:17.134460   30630 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:19:17.134514   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.144920   30630 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:19:17.144984   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.155252   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.165315   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.175583   30630 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:19:17.186303   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.198678   30630 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:19:17.217975   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
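	(Editor's note: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A small sketch of the same idea, replacing "key = value" lines without shelling out to sed, is below; the file path and keys follow the log, the helper itself is purely illustrative.)

// sketch: line-oriented replacement of "key = value" entries in a
// config file, mirroring what the log's sed commands do to 02-crio.conf.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey rewrites every line of the form `key = ...` to the given value.
func setKey(contents, key, value string) string {
	re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
	return re.ReplaceAllString(contents, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	b, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	out := setKey(string(b), "pause_image", "registry.k8s.io/pause:3.10")
	out = setKey(out, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, []byte(out), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}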
	I1004 03:19:17.229419   30630 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:19:17.241337   30630 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 03:19:17.241386   30630 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 03:19:17.254390   30630 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:19:17.264806   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:19:17.402028   30630 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 03:19:17.495758   30630 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:19:17.495841   30630 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:19:17.500623   30630 start.go:563] Will wait 60s for crictl version
	I1004 03:19:17.500678   30630 ssh_runner.go:195] Run: which crictl
	I1004 03:19:17.504705   30630 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:19:17.550368   30630 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 03:19:17.550468   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:19:17.578910   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:19:17.612824   30630 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 03:19:17.614302   30630 out.go:177]   - env NO_PROXY=192.168.39.65
	I1004 03:19:17.615583   30630 main.go:141] libmachine: (ha-994751-m02) Calling .GetIP
	I1004 03:19:17.618499   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:17.619022   30630 main.go:141] libmachine: (ha-994751-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e7:80", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:19:08 +0000 UTC Type:0 Mac:52:54:00:b0:e7:80 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-994751-m02 Clientid:01:52:54:00:b0:e7:80}
	I1004 03:19:17.619049   30630 main.go:141] libmachine: (ha-994751-m02) DBG | domain ha-994751-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:b0:e7:80 in network mk-ha-994751
	I1004 03:19:17.619276   30630 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 03:19:17.623687   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:19:17.636797   30630 mustload.go:65] Loading cluster: ha-994751
	I1004 03:19:17.637003   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:19:17.637273   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:19:17.637322   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:19:17.651836   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38711
	I1004 03:19:17.652278   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:19:17.652784   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:19:17.652801   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:19:17.653111   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:19:17.653311   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:19:17.654878   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:19:17.655231   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:19:17.655273   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:19:17.669844   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I1004 03:19:17.670238   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:19:17.670702   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:19:17.670716   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:19:17.671055   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:19:17.671261   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:19:17.671448   30630 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751 for IP: 192.168.39.117
	I1004 03:19:17.671472   30630 certs.go:194] generating shared ca certs ...
	I1004 03:19:17.671486   30630 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:19:17.671619   30630 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 03:19:17.671665   30630 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 03:19:17.671678   30630 certs.go:256] generating profile certs ...
	I1004 03:19:17.671769   30630 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key
	I1004 03:19:17.671816   30630 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.7edcc3fb
	I1004 03:19:17.671836   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.7edcc3fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65 192.168.39.117 192.168.39.254]
	I1004 03:19:17.982961   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.7edcc3fb ...
	I1004 03:19:17.982990   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.7edcc3fb: {Name:mka857c573044186dc7f952f5b2ab8a540e4e52f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:19:17.983170   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.7edcc3fb ...
	I1004 03:19:17.983188   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.7edcc3fb: {Name:mka872bfad80f36ccf6cfb0285b019b3212263dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:19:17.983268   30630 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.7edcc3fb -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt
	I1004 03:19:17.983413   30630 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.7edcc3fb -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key
	I1004 03:19:17.983593   30630 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key
	I1004 03:19:17.983610   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:19:17.983628   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:19:17.983649   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:19:17.983666   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:19:17.983685   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:19:17.983700   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:19:17.983717   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:19:17.983736   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:19:17.983821   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 03:19:17.983865   30630 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 03:19:17.983877   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:19:17.983909   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:19:17.983943   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:19:17.984054   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 03:19:17.984129   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:19:17.984175   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /usr/share/ca-certificates/168792.pem
	I1004 03:19:17.984197   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:19:17.984216   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem -> /usr/share/ca-certificates/16879.pem
	I1004 03:19:17.984276   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:19:17.987517   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:19:17.987891   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:19:17.987919   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:19:17.988138   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:19:17.988361   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:19:17.988505   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:19:17.988670   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:19:18.060182   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1004 03:19:18.065324   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1004 03:19:18.078017   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1004 03:19:18.082669   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1004 03:19:18.094668   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1004 03:19:18.099036   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1004 03:19:18.110596   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1004 03:19:18.115397   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1004 03:19:18.126291   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1004 03:19:18.131864   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1004 03:19:18.143496   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1004 03:19:18.147678   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1004 03:19:18.158714   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:19:18.185425   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 03:19:18.212989   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:19:18.238721   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 03:19:18.265688   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1004 03:19:18.292564   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 03:19:18.318046   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:19:18.343621   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:19:18.367533   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 03:19:18.391460   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:19:18.414533   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 03:19:18.437881   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1004 03:19:18.454162   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1004 03:19:18.470435   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1004 03:19:18.487697   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1004 03:19:18.504422   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1004 03:19:18.521609   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1004 03:19:18.538712   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1004 03:19:18.555759   30630 ssh_runner.go:195] Run: openssl version
	I1004 03:19:18.561485   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 03:19:18.572838   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 03:19:18.578085   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:19:18.578150   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 03:19:18.584699   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:19:18.596515   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:19:18.608107   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:19:18.613090   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:19:18.613151   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:19:18.619060   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:19:18.630222   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 03:19:18.642211   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 03:19:18.646675   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:19:18.646733   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 03:19:18.652690   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
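The three openssl/ln sequences above implement the usual CA bundle layout: each certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash. A small Go sketch of the same idea, shelling out to openssl for the hash exactly as the logged commands do (the path is one from this run):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors the logged sequence: compute the subject hash with
// `openssl x509 -hash -noout -in <cert>` and create /etc/ssl/certs/<hash>.0
// pointing at the certificate.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, as the `ln -fs` in the log does.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem"))
}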
	I1004 03:19:18.663892   30630 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:19:18.668101   30630 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 03:19:18.668177   30630 kubeadm.go:934] updating node {m02 192.168.39.117 8443 v1.31.1 crio true true} ...
	I1004 03:19:18.668262   30630 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-994751-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
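The kubelet unit and config dump above come out of minikube's per-node setup. As an illustrative sketch (not minikube's actual template), the [Service] drop-in could be rendered from the node-specific values like this:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is an illustrative template for the drop-in shown in the log;
// the real minikube template has more fields.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the log lines above (node m02).
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.1", "ha-994751-m02", "192.168.39.117"})
}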
	I1004 03:19:18.668287   30630 kube-vip.go:115] generating kube-vip config ...
	I1004 03:19:18.668368   30630 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1004 03:19:18.686599   30630 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1004 03:19:18.686662   30630 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
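The generated kube-vip static pod pins the virtual IP 192.168.39.254 on eth0 and enables control-plane load balancing on port 8443. A hedged sketch of loading that manifest back into a typed object for a sanity check; this is not something the test itself does:

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Read the manifest written to the path used later in this log.
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(err)
	}
	// Print the pod name and image, e.g. ghcr.io/kube-vip/kube-vip:v0.8.3.
	fmt.Println(pod.Name, pod.Spec.Containers[0].Image)
}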
	I1004 03:19:18.686715   30630 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:19:18.697844   30630 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1004 03:19:18.697908   30630 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1004 03:19:18.708942   30630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1004 03:19:18.708972   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1004 03:19:18.708991   30630 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1004 03:19:18.709028   30630 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1004 03:19:18.709031   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1004 03:19:18.713612   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1004 03:19:18.713636   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1004 03:19:19.809158   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:19:19.826203   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1004 03:19:19.826314   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1004 03:19:19.830837   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1004 03:19:19.830871   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1004 03:19:19.978327   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1004 03:19:19.978413   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1004 03:19:19.988543   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1004 03:19:19.988589   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
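The download URLs above carry a checksum=file:...sha256 hint, i.e. the cached kubectl/kubelet/kubeadm binaries are checked against the published SHA-256 digests before being copied to the node. A sketch of that verification step; the local digest filename here is illustrative:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

// verifySHA256 hashes the local binary and compares it with the first field
// of the published .sha256 file, as implied by the checksum= download URLs.
func verifySHA256(binPath, shaPath string) error {
	want, err := os.ReadFile(shaPath)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(want))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file %s", shaPath)
	}
	data, err := os.ReadFile(binPath)
	if err != nil {
		return err
	}
	sum := sha256.Sum256(data)
	if hex.EncodeToString(sum[:]) != fields[0] {
		return fmt.Errorf("checksum mismatch for %s", binPath)
	}
	return nil
}

func main() {
	fmt.Println(verifySHA256(
		"/home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet",
		"kubelet.sha256"))
}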
	I1004 03:19:20.364768   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1004 03:19:20.374518   30630 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1004 03:19:20.391501   30630 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:19:20.408160   30630 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1004 03:19:20.424511   30630 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:19:20.428280   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:19:20.439917   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:19:20.559800   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:19:20.576330   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:19:20.576654   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:19:20.576692   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:19:20.592425   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39375
	I1004 03:19:20.593014   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:19:20.593564   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:19:20.593590   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:19:20.593896   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:19:20.594067   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:19:20.594173   30630 start.go:317] joinCluster: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:19:20.594288   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1004 03:19:20.594307   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:19:20.597288   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:19:20.597706   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:19:20.597738   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:19:20.597851   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:19:20.598146   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:19:20.598359   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:19:20.598601   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:19:20.751261   30630 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:19:20.751313   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tfpvu2.gfmxns87jp8m6lea --discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-994751-m02 --control-plane --apiserver-advertise-address=192.168.39.117 --apiserver-bind-port=8443"
	I1004 03:19:42.477327   30630 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tfpvu2.gfmxns87jp8m6lea --discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-994751-m02 --control-plane --apiserver-advertise-address=192.168.39.117 --apiserver-bind-port=8443": (21.725989536s)
	I1004 03:19:42.477374   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1004 03:19:43.011388   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-994751-m02 minikube.k8s.io/updated_at=2024_10_04T03_19_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-994751 minikube.k8s.io/primary=false
	I1004 03:19:43.128289   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-994751-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1004 03:19:43.240778   30630 start.go:319] duration metric: took 22.646600164s to joinCluster
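After the ~21.7s kubeadm join, the run labels ha-994751-m02 and removes the control-plane NoSchedule taint via kubectl on the node. For comparison, a client-go sketch of applying the same kind of node labels without shelling out; the node name and label values come from the log, the kubeconfig path and patch body are illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Strategic-merge patch that overwrites the minikube node labels,
	// analogous to `kubectl label --overwrite nodes ha-994751-m02 ...`.
	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"ha-994751","minikube.k8s.io/primary":"false"}}}`)
	node, err := cs.CoreV1().Nodes().Patch(context.Background(), "ha-994751-m02",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(node.Labels)
}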
	I1004 03:19:43.240848   30630 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:19:43.241147   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:19:43.242449   30630 out.go:177] * Verifying Kubernetes components...
	I1004 03:19:43.243651   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:19:43.505854   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:19:43.526989   30630 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:19:43.527348   30630 kapi.go:59] client config for ha-994751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key", CAFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1004 03:19:43.527435   30630 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.65:8443
	I1004 03:19:43.527706   30630 node_ready.go:35] waiting up to 6m0s for node "ha-994751-m02" to be "Ready" ...
	I1004 03:19:43.527836   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:43.527848   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:43.527859   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:43.527864   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:43.538086   30630 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1004 03:19:44.028570   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:44.028592   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:44.028599   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:44.028604   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:44.034683   30630 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:19:44.528680   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:44.528707   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:44.528719   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:44.528727   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:44.532210   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:45.028095   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:45.028116   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:45.028124   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:45.028128   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:45.031650   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:45.528659   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:45.528681   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:45.528689   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:45.528693   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:45.532032   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:45.532726   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:46.028184   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:46.028208   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:46.028220   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:46.028224   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:46.031876   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:46.528850   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:46.528870   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:46.528878   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:46.528883   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:46.532535   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:47.028593   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:47.028614   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:47.028622   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:47.028625   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:47.032488   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:47.528380   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:47.528406   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:47.528417   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:47.528423   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:47.532834   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:19:47.533292   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:48.028846   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:48.028866   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:48.028876   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:48.028879   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:48.033387   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:19:48.527941   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:48.527965   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:48.527976   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:48.527982   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:48.531255   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:49.027941   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:49.027974   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:49.027982   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:49.027985   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:49.032078   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:19:49.527942   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:49.527977   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:49.527988   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:49.527993   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:49.531336   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:50.027938   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:50.027962   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:50.027970   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:50.027975   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:50.031574   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:50.032261   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:50.528731   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:50.528756   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:50.528762   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:50.528766   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:50.533072   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:19:51.028280   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:51.028305   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:51.028315   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:51.028318   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:51.031958   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:51.527942   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:51.527963   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:51.527971   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:51.527975   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:51.531671   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:52.028715   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:52.028739   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:52.028747   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:52.028752   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:52.032273   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:52.032782   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:52.528521   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:52.528543   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:52.528553   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:52.528556   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:52.532328   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:53.028497   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:53.028519   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:53.028533   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:53.028536   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:53.031845   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:53.527963   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:53.527986   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:53.527995   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:53.527999   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:53.531468   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:54.028502   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:54.028524   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:54.028533   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:54.028537   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:54.032380   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:54.032974   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:54.528253   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:54.528276   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:54.528286   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:54.528293   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:54.531649   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:55.028786   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:55.028804   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:55.028812   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:55.028817   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:55.032371   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:55.527931   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:55.527953   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:55.527961   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:55.527965   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:55.531477   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:56.028492   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:56.028512   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:56.028519   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:56.028524   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:56.031319   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:19:56.527963   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:56.527981   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:56.527990   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:56.527993   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:56.531347   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:56.531854   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:57.027943   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:57.027962   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:57.027970   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:57.027979   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:57.031176   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:57.527972   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:57.527995   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:57.528006   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:57.528011   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:57.531355   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:58.028084   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:58.028103   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:58.028111   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:58.028115   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:58.034080   30630 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 03:19:58.527939   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:58.527959   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:58.527967   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:58.527972   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:58.530892   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:19:59.027908   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:59.027929   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:59.027938   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:59.027943   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:59.031093   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:19:59.031750   30630 node_ready.go:53] node "ha-994751-m02" has status "Ready":"False"
	I1004 03:19:59.528117   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:19:59.528140   30630 round_trippers.go:469] Request Headers:
	I1004 03:19:59.528148   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:19:59.528152   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:19:59.531338   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.027934   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:00.027956   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.027964   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.027968   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.031243   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.527969   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:00.527990   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.527998   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.528002   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.535322   30630 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:20:00.536101   30630 node_ready.go:49] node "ha-994751-m02" has status "Ready":"True"
	I1004 03:20:00.536141   30630 node_ready.go:38] duration metric: took 17.008396711s for node "ha-994751-m02" to be "Ready" ...
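The poll loop above repeats GET /api/v1/nodes/ha-994751-m02 roughly every 500ms until the node reports Ready (about 17s here). The equivalent readiness check expressed with client-go, as a sketch; the kubeconfig path is the one loaded earlier in this log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True, which is the
// condition the poll loop above is waiting on.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19546-9647/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-994751-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", nodeReady(node))
}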
	I1004 03:20:00.536154   30630 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:20:00.536255   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:00.536269   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.536281   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.536287   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.550231   30630 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1004 03:20:00.558943   30630 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.559041   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-l6zst
	I1004 03:20:00.559052   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.559063   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.559071   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.562462   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.563534   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.563551   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.563558   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.563562   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.566458   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.567373   30630 pod_ready.go:93] pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.567390   30630 pod_ready.go:82] duration metric: took 8.418573ms for pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.567399   30630 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.567443   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zgdck
	I1004 03:20:00.567450   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.567457   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.567461   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.571010   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.572015   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.572028   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.572035   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.572040   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.574144   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.574637   30630 pod_ready.go:93] pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.574653   30630 pod_ready.go:82] duration metric: took 7.248385ms for pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.574660   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.574701   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751
	I1004 03:20:00.574708   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.574714   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.574718   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.577426   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.578237   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.578256   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.578262   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.578268   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.581297   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.582104   30630 pod_ready.go:93] pod "etcd-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.582124   30630 pod_ready.go:82] duration metric: took 7.457921ms for pod "etcd-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.582136   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.582194   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751-m02
	I1004 03:20:00.582206   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.582213   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.582218   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.584954   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.586074   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:00.586089   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.586096   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.586098   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.588315   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:20:00.588797   30630 pod_ready.go:93] pod "etcd-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.588819   30630 pod_ready.go:82] duration metric: took 6.675728ms for pod "etcd-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.588836   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:00.728447   30630 request.go:632] Waited for 139.544334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751
	I1004 03:20:00.728509   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751
	I1004 03:20:00.728514   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.728522   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.728527   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.732242   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.928492   30630 request.go:632] Waited for 195.478493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.928550   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:00.928556   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:00.928563   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:00.928567   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:00.932014   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:00.932660   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:00.932680   30630 pod_ready.go:82] duration metric: took 343.837498ms for pod "kube-apiserver-ha-994751" in "kube-system" namespace to be "Ready" ...
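The "Waited for ... due to client-side throttling" lines come from client-go's default client-side rate limiter (QPS 5, burst 10) spacing out the per-pod GETs. A sketch of how a caller could raise those limits on the rest.Config before building the clientset; the numbers are arbitrary, not what minikube configures:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19546-9647/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Loosen the default client-side rate limiter so bursts of GETs are not
	// delayed by the throttling seen in the log above.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("clientset ready:", cs != nil)
}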
	I1004 03:20:00.932690   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.128708   30630 request.go:632] Waited for 195.949159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m02
	I1004 03:20:01.128769   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m02
	I1004 03:20:01.128778   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.128786   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.128790   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.131924   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:01.328936   30630 request.go:632] Waited for 196.247417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:01.328982   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:01.328986   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.328993   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.328999   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.332116   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:01.332718   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:01.332735   30630 pod_ready.go:82] duration metric: took 400.039408ms for pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.332744   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.528985   30630 request.go:632] Waited for 196.178172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751
	I1004 03:20:01.529051   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751
	I1004 03:20:01.529057   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.529064   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.529068   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.532813   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:01.728751   30630 request.go:632] Waited for 195.374296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:01.728822   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:01.728828   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.728835   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.728838   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.732685   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:01.733267   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:01.733284   30630 pod_ready.go:82] duration metric: took 400.533757ms for pod "kube-controller-manager-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.733292   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:01.928444   30630 request.go:632] Waited for 195.093384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m02
	I1004 03:20:01.928511   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m02
	I1004 03:20:01.928517   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:01.928523   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:01.928531   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:01.931659   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.128724   30630 request.go:632] Waited for 196.347214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:02.128778   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:02.128783   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.128789   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.128794   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.132222   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.132803   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:02.132822   30630 pod_ready.go:82] duration metric: took 399.524177ms for pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.132832   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f44b9" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.328210   30630 request.go:632] Waited for 195.309099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f44b9
	I1004 03:20:02.328274   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f44b9
	I1004 03:20:02.328281   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.328288   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.328293   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.331313   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.528409   30630 request.go:632] Waited for 196.390078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:02.528468   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:02.528474   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.528481   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.528486   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.531912   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.532422   30630 pod_ready.go:93] pod "kube-proxy-f44b9" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:02.532446   30630 pod_ready.go:82] duration metric: took 399.600972ms for pod "kube-proxy-f44b9" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.532455   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ph6cf" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.728449   30630 request.go:632] Waited for 195.932314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ph6cf
	I1004 03:20:02.728525   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ph6cf
	I1004 03:20:02.728531   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.728539   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.728547   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.732138   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.928159   30630 request.go:632] Waited for 195.316789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:02.928222   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:02.928227   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:02.928234   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:02.928238   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:02.931607   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:02.932124   30630 pod_ready.go:93] pod "kube-proxy-ph6cf" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:02.932148   30630 pod_ready.go:82] duration metric: took 399.687611ms for pod "kube-proxy-ph6cf" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:02.932157   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:03.128514   30630 request.go:632] Waited for 196.295312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751
	I1004 03:20:03.128566   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751
	I1004 03:20:03.128571   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.128579   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.128585   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.131954   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:03.328958   30630 request.go:632] Waited for 196.406685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:03.329017   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:20:03.329023   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.329031   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.329039   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.332357   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:03.332971   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:03.332988   30630 pod_ready.go:82] duration metric: took 400.824355ms for pod "kube-scheduler-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:03.332997   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:03.528105   30630 request.go:632] Waited for 195.029512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m02
	I1004 03:20:03.528157   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m02
	I1004 03:20:03.528162   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.528169   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.528173   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.531733   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:03.727947   30630 request.go:632] Waited for 195.304105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:03.728022   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:20:03.728029   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.728038   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.728046   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.731222   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:03.731799   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:20:03.731823   30630 pod_ready.go:82] duration metric: took 398.818433ms for pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:20:03.731836   30630 pod_ready.go:39] duration metric: took 3.195663558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
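The stretch of log above is minikube's pod_ready helper: for each system-critical pod (etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler on both control-plane nodes) it repeatedly GETs the pod and its node until the pod reports Ready, pacing requests to stay under client-side throttling. Below is a minimal client-go sketch of that polling pattern; it is not minikube's own code, the kubeconfig path and the 2-second pacing are assumptions, and waitPodReady is an illustrative helper name.

    // pod_ready_sketch.go: poll a pod until its Ready condition is True or a timeout expires.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady keeps fetching the named pod and returns once its Ready condition is True.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second) // crude pacing; the real client also applies client-side throttling
        }
        return fmt.Errorf("pod %s/%s was not Ready within %v", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitPodReady(context.Background(), cs, "kube-system", "kube-apiserver-ha-994751", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("kube-apiserver-ha-994751 is Ready")
    }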
	I1004 03:20:03.731854   30630 api_server.go:52] waiting for apiserver process to appear ...
	I1004 03:20:03.731914   30630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:20:03.748156   30630 api_server.go:72] duration metric: took 20.507274316s to wait for apiserver process to appear ...
	I1004 03:20:03.748186   30630 api_server.go:88] waiting for apiserver healthz status ...
	I1004 03:20:03.748208   30630 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I1004 03:20:03.752562   30630 api_server.go:279] https://192.168.39.65:8443/healthz returned 200:
	ok
	I1004 03:20:03.752615   30630 round_trippers.go:463] GET https://192.168.39.65:8443/version
	I1004 03:20:03.752620   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.752627   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.752633   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.753368   30630 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1004 03:20:03.753569   30630 api_server.go:141] control plane version: v1.31.1
	I1004 03:20:03.753592   30630 api_server.go:131] duration metric: took 5.397003ms to wait for apiserver health ...
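Above, the tool then probes /healthz on the VIP-backed API server and reads /version (v1.31.1). A comparable probe with client-go, assuming the same kubeconfig as in the previous sketch, could look like this:

    // healthz_sketch.go: probe apiserver health and report the control-plane version.
    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /healthz through the authenticated REST client; a body of "ok" means healthy.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)
        // GET /version, the same request the log shows returning the control plane version.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Printf("control plane version: %s\n", v.GitVersion)
    }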
	I1004 03:20:03.753601   30630 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 03:20:03.928947   30630 request.go:632] Waited for 175.282043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:03.929032   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:03.929040   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:03.929049   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:03.929055   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:03.934063   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:20:03.938318   30630 system_pods.go:59] 17 kube-system pods found
	I1004 03:20:03.938350   30630 system_pods.go:61] "coredns-7c65d6cfc9-l6zst" [554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab] Running
	I1004 03:20:03.938358   30630 system_pods.go:61] "coredns-7c65d6cfc9-zgdck" [dcd6ed49-8491-4eb0-9863-b498c76ec3c5] Running
	I1004 03:20:03.938363   30630 system_pods.go:61] "etcd-ha-994751" [ad26bfe8-b3b3-44fb-8f83-fd4f62a92ea6] Running
	I1004 03:20:03.938369   30630 system_pods.go:61] "etcd-ha-994751-m02" [540bc27b-d1ee-48e7-99a3-5daf036ec06f] Running
	I1004 03:20:03.938373   30630 system_pods.go:61] "kindnet-2mhh2" [442d5ad9-dc9c-4a07-90b3-549591f9d2f1] Running
	I1004 03:20:03.938378   30630 system_pods.go:61] "kindnet-rmcvt" [08ef4494-9229-4cd3-b22e-10709b88e14c] Running
	I1004 03:20:03.938383   30630 system_pods.go:61] "kube-apiserver-ha-994751" [68f6b078-f7ae-4c2d-b424-372647c7d203] Running
	I1004 03:20:03.938387   30630 system_pods.go:61] "kube-apiserver-ha-994751-m02" [f4773005-30fa-46bc-b372-bbd0c7bfc1f1] Running
	I1004 03:20:03.938392   30630 system_pods.go:61] "kube-controller-manager-ha-994751" [7a09373f-d6d5-4fa2-bc68-0f2ba151761f] Running
	I1004 03:20:03.938397   30630 system_pods.go:61] "kube-controller-manager-ha-994751-m02" [31278e89-fb33-4713-b0e4-3b589944caa9] Running
	I1004 03:20:03.938402   30630 system_pods.go:61] "kube-proxy-f44b9" [e3e1a917-0150-4608-b5f3-b590d330d2ce] Running
	I1004 03:20:03.938408   30630 system_pods.go:61] "kube-proxy-ph6cf" [0eef5964-876b-4cee-ad4c-c93ab034d3f9] Running
	I1004 03:20:03.938416   30630 system_pods.go:61] "kube-scheduler-ha-994751" [527e5bff-7234-4ab0-9952-fe9ec87ab01a] Running
	I1004 03:20:03.938422   30630 system_pods.go:61] "kube-scheduler-ha-994751-m02" [9c89432d-e607-4e86-8d68-12ee0c2f3170] Running
	I1004 03:20:03.938430   30630 system_pods.go:61] "kube-vip-ha-994751" [7955e17e-6c22-49b3-aa6a-9d37b9bc7942] Running
	I1004 03:20:03.938435   30630 system_pods.go:61] "kube-vip-ha-994751-m02" [944f1844-b8c2-410e-981d-3705beb22638] Running
	I1004 03:20:03.938440   30630 system_pods.go:61] "storage-provisioner" [cc60903f-91b9-4e59-92ab-9f16c09d38d2] Running
	I1004 03:20:03.938450   30630 system_pods.go:74] duration metric: took 184.842668ms to wait for pod list to return data ...
	I1004 03:20:03.938469   30630 default_sa.go:34] waiting for default service account to be created ...
	I1004 03:20:04.128894   30630 request.go:632] Waited for 190.327691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:20:04.128944   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:20:04.128949   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:04.128956   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:04.128960   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:04.132905   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:04.133105   30630 default_sa.go:45] found service account: "default"
	I1004 03:20:04.133122   30630 default_sa.go:55] duration metric: took 194.645917ms for default service account to be created ...
	I1004 03:20:04.133132   30630 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 03:20:04.328598   30630 request.go:632] Waited for 195.393579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:04.328702   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:20:04.328730   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:04.328744   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:04.328753   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:04.333188   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:20:04.337805   30630 system_pods.go:86] 17 kube-system pods found
	I1004 03:20:04.337832   30630 system_pods.go:89] "coredns-7c65d6cfc9-l6zst" [554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab] Running
	I1004 03:20:04.337838   30630 system_pods.go:89] "coredns-7c65d6cfc9-zgdck" [dcd6ed49-8491-4eb0-9863-b498c76ec3c5] Running
	I1004 03:20:04.337842   30630 system_pods.go:89] "etcd-ha-994751" [ad26bfe8-b3b3-44fb-8f83-fd4f62a92ea6] Running
	I1004 03:20:04.337848   30630 system_pods.go:89] "etcd-ha-994751-m02" [540bc27b-d1ee-48e7-99a3-5daf036ec06f] Running
	I1004 03:20:04.337851   30630 system_pods.go:89] "kindnet-2mhh2" [442d5ad9-dc9c-4a07-90b3-549591f9d2f1] Running
	I1004 03:20:04.337855   30630 system_pods.go:89] "kindnet-rmcvt" [08ef4494-9229-4cd3-b22e-10709b88e14c] Running
	I1004 03:20:04.337859   30630 system_pods.go:89] "kube-apiserver-ha-994751" [68f6b078-f7ae-4c2d-b424-372647c7d203] Running
	I1004 03:20:04.337863   30630 system_pods.go:89] "kube-apiserver-ha-994751-m02" [f4773005-30fa-46bc-b372-bbd0c7bfc1f1] Running
	I1004 03:20:04.337867   30630 system_pods.go:89] "kube-controller-manager-ha-994751" [7a09373f-d6d5-4fa2-bc68-0f2ba151761f] Running
	I1004 03:20:04.337874   30630 system_pods.go:89] "kube-controller-manager-ha-994751-m02" [31278e89-fb33-4713-b0e4-3b589944caa9] Running
	I1004 03:20:04.337878   30630 system_pods.go:89] "kube-proxy-f44b9" [e3e1a917-0150-4608-b5f3-b590d330d2ce] Running
	I1004 03:20:04.337885   30630 system_pods.go:89] "kube-proxy-ph6cf" [0eef5964-876b-4cee-ad4c-c93ab034d3f9] Running
	I1004 03:20:04.337889   30630 system_pods.go:89] "kube-scheduler-ha-994751" [527e5bff-7234-4ab0-9952-fe9ec87ab01a] Running
	I1004 03:20:04.337901   30630 system_pods.go:89] "kube-scheduler-ha-994751-m02" [9c89432d-e607-4e86-8d68-12ee0c2f3170] Running
	I1004 03:20:04.337904   30630 system_pods.go:89] "kube-vip-ha-994751" [7955e17e-6c22-49b3-aa6a-9d37b9bc7942] Running
	I1004 03:20:04.337907   30630 system_pods.go:89] "kube-vip-ha-994751-m02" [944f1844-b8c2-410e-981d-3705beb22638] Running
	I1004 03:20:04.337912   30630 system_pods.go:89] "storage-provisioner" [cc60903f-91b9-4e59-92ab-9f16c09d38d2] Running
	I1004 03:20:04.337921   30630 system_pods.go:126] duration metric: took 204.78361ms to wait for k8s-apps to be running ...
	I1004 03:20:04.337930   30630 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 03:20:04.337975   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:20:04.352705   30630 system_svc.go:56] duration metric: took 14.766178ms WaitForService to wait for kubelet
	I1004 03:20:04.352728   30630 kubeadm.go:582] duration metric: took 21.111850874s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:20:04.352744   30630 node_conditions.go:102] verifying NodePressure condition ...
	I1004 03:20:04.528049   30630 request.go:632] Waited for 175.240806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes
	I1004 03:20:04.528140   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes
	I1004 03:20:04.528148   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:04.528158   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:04.528166   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:04.532040   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:04.532645   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:20:04.532668   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:20:04.532682   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:20:04.532689   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:20:04.532696   30630 node_conditions.go:105] duration metric: took 179.947049ms to run NodePressure ...
	I1004 03:20:04.532711   30630 start.go:241] waiting for startup goroutines ...
	I1004 03:20:04.532748   30630 start.go:255] writing updated cluster config ...
	I1004 03:20:04.534798   30630 out.go:201] 
	I1004 03:20:04.536250   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:20:04.536346   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:20:04.537713   30630 out.go:177] * Starting "ha-994751-m03" control-plane node in "ha-994751" cluster
	I1004 03:20:04.538772   30630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:20:04.538791   30630 cache.go:56] Caching tarball of preloaded images
	I1004 03:20:04.538881   30630 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 03:20:04.538892   30630 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:20:04.538970   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:20:04.539124   30630 start.go:360] acquireMachinesLock for ha-994751-m03: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 03:20:04.539179   30630 start.go:364] duration metric: took 32.772µs to acquireMachinesLock for "ha-994751-m03"
	I1004 03:20:04.539202   30630 start.go:93] Provisioning new machine with config: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:20:04.539327   30630 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1004 03:20:04.540776   30630 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 03:20:04.540857   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:20:04.540889   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:20:04.555427   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46337
	I1004 03:20:04.555831   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:20:04.556364   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:20:04.556394   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:20:04.556738   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:20:04.556921   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetMachineName
	I1004 03:20:04.557038   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:04.557175   30630 start.go:159] libmachine.API.Create for "ha-994751" (driver="kvm2")
	I1004 03:20:04.557204   30630 client.go:168] LocalClient.Create starting
	I1004 03:20:04.557233   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 03:20:04.557271   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:20:04.557291   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:20:04.557375   30630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 03:20:04.557421   30630 main.go:141] libmachine: Decoding PEM data...
	I1004 03:20:04.557449   30630 main.go:141] libmachine: Parsing certificate...
	I1004 03:20:04.557481   30630 main.go:141] libmachine: Running pre-create checks...
	I1004 03:20:04.557495   30630 main.go:141] libmachine: (ha-994751-m03) Calling .PreCreateCheck
	I1004 03:20:04.557705   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetConfigRaw
	I1004 03:20:04.558081   30630 main.go:141] libmachine: Creating machine...
	I1004 03:20:04.558096   30630 main.go:141] libmachine: (ha-994751-m03) Calling .Create
	I1004 03:20:04.558257   30630 main.go:141] libmachine: (ha-994751-m03) Creating KVM machine...
	I1004 03:20:04.559668   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found existing default KVM network
	I1004 03:20:04.559869   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found existing private KVM network mk-ha-994751
	I1004 03:20:04.560039   30630 main.go:141] libmachine: (ha-994751-m03) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03 ...
	I1004 03:20:04.560065   30630 main.go:141] libmachine: (ha-994751-m03) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 03:20:04.560110   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:04.560016   31400 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:20:04.560192   30630 main.go:141] libmachine: (ha-994751-m03) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 03:20:04.808276   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:04.808145   31400 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa...
	I1004 03:20:05.005812   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:05.005703   31400 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/ha-994751-m03.rawdisk...
	I1004 03:20:05.005838   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Writing magic tar header
	I1004 03:20:05.005848   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Writing SSH key tar header
	I1004 03:20:05.005856   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:05.005807   31400 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03 ...
	I1004 03:20:05.005932   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03
	I1004 03:20:05.005971   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 03:20:05.006001   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03 (perms=drwx------)
	I1004 03:20:05.006011   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:20:05.006021   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 03:20:05.006034   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 03:20:05.006047   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 03:20:05.006063   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 03:20:05.006075   30630 main.go:141] libmachine: (ha-994751-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 03:20:05.006086   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 03:20:05.006100   30630 main.go:141] libmachine: (ha-994751-m03) Creating domain...
	I1004 03:20:05.006109   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 03:20:05.006122   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home/jenkins
	I1004 03:20:05.006135   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Checking permissions on dir: /home
	I1004 03:20:05.006147   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Skipping /home - not owner
	I1004 03:20:05.007092   30630 main.go:141] libmachine: (ha-994751-m03) define libvirt domain using xml: 
	I1004 03:20:05.007116   30630 main.go:141] libmachine: (ha-994751-m03) <domain type='kvm'>
	I1004 03:20:05.007126   30630 main.go:141] libmachine: (ha-994751-m03)   <name>ha-994751-m03</name>
	I1004 03:20:05.007139   30630 main.go:141] libmachine: (ha-994751-m03)   <memory unit='MiB'>2200</memory>
	I1004 03:20:05.007151   30630 main.go:141] libmachine: (ha-994751-m03)   <vcpu>2</vcpu>
	I1004 03:20:05.007158   30630 main.go:141] libmachine: (ha-994751-m03)   <features>
	I1004 03:20:05.007166   30630 main.go:141] libmachine: (ha-994751-m03)     <acpi/>
	I1004 03:20:05.007173   30630 main.go:141] libmachine: (ha-994751-m03)     <apic/>
	I1004 03:20:05.007177   30630 main.go:141] libmachine: (ha-994751-m03)     <pae/>
	I1004 03:20:05.007183   30630 main.go:141] libmachine: (ha-994751-m03)     
	I1004 03:20:05.007189   30630 main.go:141] libmachine: (ha-994751-m03)   </features>
	I1004 03:20:05.007198   30630 main.go:141] libmachine: (ha-994751-m03)   <cpu mode='host-passthrough'>
	I1004 03:20:05.007205   30630 main.go:141] libmachine: (ha-994751-m03)   
	I1004 03:20:05.007209   30630 main.go:141] libmachine: (ha-994751-m03)   </cpu>
	I1004 03:20:05.007231   30630 main.go:141] libmachine: (ha-994751-m03)   <os>
	I1004 03:20:05.007247   30630 main.go:141] libmachine: (ha-994751-m03)     <type>hvm</type>
	I1004 03:20:05.007256   30630 main.go:141] libmachine: (ha-994751-m03)     <boot dev='cdrom'/>
	I1004 03:20:05.007270   30630 main.go:141] libmachine: (ha-994751-m03)     <boot dev='hd'/>
	I1004 03:20:05.007282   30630 main.go:141] libmachine: (ha-994751-m03)     <bootmenu enable='no'/>
	I1004 03:20:05.007301   30630 main.go:141] libmachine: (ha-994751-m03)   </os>
	I1004 03:20:05.007312   30630 main.go:141] libmachine: (ha-994751-m03)   <devices>
	I1004 03:20:05.007323   30630 main.go:141] libmachine: (ha-994751-m03)     <disk type='file' device='cdrom'>
	I1004 03:20:05.007339   30630 main.go:141] libmachine: (ha-994751-m03)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/boot2docker.iso'/>
	I1004 03:20:05.007353   30630 main.go:141] libmachine: (ha-994751-m03)       <target dev='hdc' bus='scsi'/>
	I1004 03:20:05.007365   30630 main.go:141] libmachine: (ha-994751-m03)       <readonly/>
	I1004 03:20:05.007373   30630 main.go:141] libmachine: (ha-994751-m03)     </disk>
	I1004 03:20:05.007385   30630 main.go:141] libmachine: (ha-994751-m03)     <disk type='file' device='disk'>
	I1004 03:20:05.007397   30630 main.go:141] libmachine: (ha-994751-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 03:20:05.007412   30630 main.go:141] libmachine: (ha-994751-m03)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/ha-994751-m03.rawdisk'/>
	I1004 03:20:05.007427   30630 main.go:141] libmachine: (ha-994751-m03)       <target dev='hda' bus='virtio'/>
	I1004 03:20:05.007439   30630 main.go:141] libmachine: (ha-994751-m03)     </disk>
	I1004 03:20:05.007448   30630 main.go:141] libmachine: (ha-994751-m03)     <interface type='network'>
	I1004 03:20:05.007465   30630 main.go:141] libmachine: (ha-994751-m03)       <source network='mk-ha-994751'/>
	I1004 03:20:05.007474   30630 main.go:141] libmachine: (ha-994751-m03)       <model type='virtio'/>
	I1004 03:20:05.007484   30630 main.go:141] libmachine: (ha-994751-m03)     </interface>
	I1004 03:20:05.007498   30630 main.go:141] libmachine: (ha-994751-m03)     <interface type='network'>
	I1004 03:20:05.007510   30630 main.go:141] libmachine: (ha-994751-m03)       <source network='default'/>
	I1004 03:20:05.007520   30630 main.go:141] libmachine: (ha-994751-m03)       <model type='virtio'/>
	I1004 03:20:05.007530   30630 main.go:141] libmachine: (ha-994751-m03)     </interface>
	I1004 03:20:05.007540   30630 main.go:141] libmachine: (ha-994751-m03)     <serial type='pty'>
	I1004 03:20:05.007550   30630 main.go:141] libmachine: (ha-994751-m03)       <target port='0'/>
	I1004 03:20:05.007559   30630 main.go:141] libmachine: (ha-994751-m03)     </serial>
	I1004 03:20:05.007576   30630 main.go:141] libmachine: (ha-994751-m03)     <console type='pty'>
	I1004 03:20:05.007591   30630 main.go:141] libmachine: (ha-994751-m03)       <target type='serial' port='0'/>
	I1004 03:20:05.007600   30630 main.go:141] libmachine: (ha-994751-m03)     </console>
	I1004 03:20:05.007608   30630 main.go:141] libmachine: (ha-994751-m03)     <rng model='virtio'>
	I1004 03:20:05.007614   30630 main.go:141] libmachine: (ha-994751-m03)       <backend model='random'>/dev/random</backend>
	I1004 03:20:05.007620   30630 main.go:141] libmachine: (ha-994751-m03)     </rng>
	I1004 03:20:05.007628   30630 main.go:141] libmachine: (ha-994751-m03)     
	I1004 03:20:05.007636   30630 main.go:141] libmachine: (ha-994751-m03)     
	I1004 03:20:05.007652   30630 main.go:141] libmachine: (ha-994751-m03)   </devices>
	I1004 03:20:05.007666   30630 main.go:141] libmachine: (ha-994751-m03) </domain>
	I1004 03:20:05.007678   30630 main.go:141] libmachine: (ha-994751-m03) 
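The lines above dump the libvirt domain XML the kvm2 driver generates for ha-994751-m03 (2 vCPUs, 2200 MiB, the boot2docker ISO as cdrom, the raw disk, and two virtio NICs on mk-ha-994751 and default). A minimal sketch of defining and starting a guest from such XML with the Go libvirt bindings follows; it assumes libvirt.org/go/libvirt and a local qemu:///system, and the embedded XML is a trimmed placeholder rather than the full definition above.

    // define_domain_sketch.go: define and start a libvirt/KVM guest from a domain XML string.
    package main

    import (
        "fmt"

        "libvirt.org/go/libvirt"
    )

    const domainXML = `<domain type='kvm'>
      <name>example-guest</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <os><type>hvm</type><boot dev='hd'/></os>
    </domain>` // placeholder: the real definition above also carries disks, NICs, serial console and rng

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Persistently define the domain, then start it (the "Creating domain..." step in the log).
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            panic(err)
        }
        fmt.Println("domain defined and started")
    }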
	I1004 03:20:05.014475   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:d0:97:18 in network default
	I1004 03:20:05.015005   30630 main.go:141] libmachine: (ha-994751-m03) Ensuring networks are active...
	I1004 03:20:05.015041   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:05.015645   30630 main.go:141] libmachine: (ha-994751-m03) Ensuring network default is active
	I1004 03:20:05.015928   30630 main.go:141] libmachine: (ha-994751-m03) Ensuring network mk-ha-994751 is active
	I1004 03:20:05.016249   30630 main.go:141] libmachine: (ha-994751-m03) Getting domain xml...
	I1004 03:20:05.016929   30630 main.go:141] libmachine: (ha-994751-m03) Creating domain...
	I1004 03:20:06.261440   30630 main.go:141] libmachine: (ha-994751-m03) Waiting to get IP...
	I1004 03:20:06.262071   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:06.262414   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:06.262472   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:06.262421   31400 retry.go:31] will retry after 250.348601ms: waiting for machine to come up
	I1004 03:20:06.515070   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:06.515535   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:06.515565   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:06.515468   31400 retry.go:31] will retry after 243.422578ms: waiting for machine to come up
	I1004 03:20:06.760919   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:06.761413   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:06.761440   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:06.761366   31400 retry.go:31] will retry after 323.138496ms: waiting for machine to come up
	I1004 03:20:07.085754   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:07.086220   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:07.086254   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:07.086174   31400 retry.go:31] will retry after 589.608599ms: waiting for machine to come up
	I1004 03:20:07.676793   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:07.677255   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:07.677277   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:07.677220   31400 retry.go:31] will retry after 686.955192ms: waiting for machine to come up
	I1004 03:20:08.365977   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:08.366366   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:08.366390   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:08.366322   31400 retry.go:31] will retry after 861.927469ms: waiting for machine to come up
	I1004 03:20:09.229974   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:09.230402   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:09.230431   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:09.230354   31400 retry.go:31] will retry after 766.03024ms: waiting for machine to come up
	I1004 03:20:09.997533   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:09.997938   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:09.997963   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:09.997907   31400 retry.go:31] will retry after 980.127757ms: waiting for machine to come up
	I1004 03:20:10.979306   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:10.979718   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:10.979743   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:10.979684   31400 retry.go:31] will retry after 1.544904084s: waiting for machine to come up
	I1004 03:20:12.525854   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:12.526225   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:12.526249   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:12.526177   31400 retry.go:31] will retry after 1.432028005s: waiting for machine to come up
	I1004 03:20:13.960907   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:13.961388   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:13.961415   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:13.961367   31400 retry.go:31] will retry after 1.927604807s: waiting for machine to come up
	I1004 03:20:15.890697   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:15.891148   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:15.891175   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:15.891091   31400 retry.go:31] will retry after 3.506356031s: waiting for machine to come up
	I1004 03:20:19.400810   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:19.401322   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:19.401349   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:19.401272   31400 retry.go:31] will retry after 3.367410839s: waiting for machine to come up
	I1004 03:20:22.769867   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:22.770373   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find current IP address of domain ha-994751-m03 in network mk-ha-994751
	I1004 03:20:22.770407   30630 main.go:141] libmachine: (ha-994751-m03) DBG | I1004 03:20:22.770302   31400 retry.go:31] will retry after 5.266869096s: waiting for machine to come up
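From 03:20:06 to 03:20:22 the driver re-checks the mk-ha-994751 DHCP leases for the guest's MAC, sleeping a little longer after each miss (roughly 250ms up to several seconds) until the address finally appears just below. A generic sketch of that wait-with-growing-backoff pattern is shown here; checkLease is a hypothetical stand-in for the lease lookup, not a libvirt or minikube API.

    // wait_for_ip_sketch.go: retry an operation with growing, jittered delays until it
    // succeeds or an overall deadline passes.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("no lease yet")

    // checkLease is a placeholder: a real implementation would query the network's DHCP leases here.
    func checkLease(mac string) (string, error) {
        return "", errNoLease
    }

    func waitForIP(mac string, deadline time.Duration) (string, error) {
        start := time.Now()
        delay := 250 * time.Millisecond
        for time.Since(start) < deadline {
            if ip, err := checkLease(mac); err == nil {
                return ip, nil
            }
            // Grow the delay and add jitter, roughly matching the 250ms..5s steps in the log.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("machine did not get an IP within %v", deadline)
    }

    func main() {
        ip, err := waitForIP("52:54:00:49:76:ea", 30*time.Second)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("found IP:", ip)
    }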
	I1004 03:20:28.041532   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:28.041995   30630 main.go:141] libmachine: (ha-994751-m03) Found IP for machine: 192.168.39.53
	I1004 03:20:28.042014   30630 main.go:141] libmachine: (ha-994751-m03) Reserving static IP address...
	I1004 03:20:28.042026   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has current primary IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:28.042375   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find host DHCP lease matching {name: "ha-994751-m03", mac: "52:54:00:49:76:ea", ip: "192.168.39.53"} in network mk-ha-994751
	I1004 03:20:28.115076   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Getting to WaitForSSH function...
	I1004 03:20:28.115105   30630 main.go:141] libmachine: (ha-994751-m03) Reserved static IP address: 192.168.39.53
	I1004 03:20:28.115145   30630 main.go:141] libmachine: (ha-994751-m03) Waiting for SSH to be available...
	I1004 03:20:28.117390   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:28.117662   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751
	I1004 03:20:28.117678   30630 main.go:141] libmachine: (ha-994751-m03) DBG | unable to find defined IP address of network mk-ha-994751 interface with MAC address 52:54:00:49:76:ea
	I1004 03:20:28.117841   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using SSH client type: external
	I1004 03:20:28.117866   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa (-rw-------)
	I1004 03:20:28.117909   30630 main.go:141] libmachine: (ha-994751-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 03:20:28.117924   30630 main.go:141] libmachine: (ha-994751-m03) DBG | About to run SSH command:
	I1004 03:20:28.117940   30630 main.go:141] libmachine: (ha-994751-m03) DBG | exit 0
	I1004 03:20:28.121632   30630 main.go:141] libmachine: (ha-994751-m03) DBG | SSH cmd err, output: exit status 255: 
	I1004 03:20:28.121657   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1004 03:20:28.121668   30630 main.go:141] libmachine: (ha-994751-m03) DBG | command : exit 0
	I1004 03:20:28.121677   30630 main.go:141] libmachine: (ha-994751-m03) DBG | err     : exit status 255
	I1004 03:20:28.121690   30630 main.go:141] libmachine: (ha-994751-m03) DBG | output  : 
	I1004 03:20:31.123157   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Getting to WaitForSSH function...
	I1004 03:20:31.125515   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.125954   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.125981   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.126121   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using SSH client type: external
	I1004 03:20:31.126148   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa (-rw-------)
	I1004 03:20:31.126175   30630 main.go:141] libmachine: (ha-994751-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 03:20:31.126186   30630 main.go:141] libmachine: (ha-994751-m03) DBG | About to run SSH command:
	I1004 03:20:31.126199   30630 main.go:141] libmachine: (ha-994751-m03) DBG | exit 0
	I1004 03:20:31.255788   30630 main.go:141] libmachine: (ha-994751-m03) DBG | SSH cmd err, output: <nil>: 
	I1004 03:20:31.256048   30630 main.go:141] libmachine: (ha-994751-m03) KVM machine creation complete!
	I1004 03:20:31.256416   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetConfigRaw
	I1004 03:20:31.257001   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:31.257196   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:31.257537   30630 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 03:20:31.257552   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetState
	I1004 03:20:31.258954   30630 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 03:20:31.258966   30630 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 03:20:31.258972   30630 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 03:20:31.258978   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.261065   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.261407   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.261432   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.261523   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.261696   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.261827   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.261939   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.262104   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.262338   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.262354   30630 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 03:20:31.371392   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:20:31.371421   30630 main.go:141] libmachine: Detecting the provisioner...
	I1004 03:20:31.371431   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.374360   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.374677   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.374703   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.374874   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.375093   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.375299   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.375463   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.375637   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.375858   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.375873   30630 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 03:20:31.489043   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 03:20:31.489093   30630 main.go:141] libmachine: found compatible host: buildroot
	I1004 03:20:31.489100   30630 main.go:141] libmachine: Provisioning with buildroot...
	I1004 03:20:31.489107   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetMachineName
	I1004 03:20:31.489333   30630 buildroot.go:166] provisioning hostname "ha-994751-m03"
	I1004 03:20:31.489357   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetMachineName
	I1004 03:20:31.489534   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.492101   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.492553   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.492573   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.492727   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.492907   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.493039   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.493147   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.493277   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.493442   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.493453   30630 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-994751-m03 && echo "ha-994751-m03" | sudo tee /etc/hostname
	I1004 03:20:31.626029   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-994751-m03
	
	I1004 03:20:31.626058   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.628598   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.629032   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.629055   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.629247   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.629454   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.629599   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.629757   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.629901   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.630075   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.630108   30630 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-994751-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-994751-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-994751-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:20:31.754855   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
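
The hostname step above comes down to guaranteeing that /etc/hosts maps 127.0.1.1 to the new machine name before anything else references it. A minimal Go sketch of that check-then-rewrite logic (ensureHostsEntry is a hypothetical helper for illustration, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell above: if no /etc/hosts line already
// mentions the machine name, rewrite an existing 127.0.1.1 line or append one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		for _, field := range strings.Fields(l) {
			if field == hostname {
				return nil // hostname already mapped
			}
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, "127.0.1.1 "+hostname)
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "ha-994751-m03"); err != nil {
		fmt.Fprintln(os.Stderr, "hosts update failed:", err)
		os.Exit(1)
	}
}
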
	I1004 03:20:31.754886   30630 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 03:20:31.754923   30630 buildroot.go:174] setting up certificates
	I1004 03:20:31.754934   30630 provision.go:84] configureAuth start
	I1004 03:20:31.754946   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetMachineName
	I1004 03:20:31.755194   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetIP
	I1004 03:20:31.757747   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.758065   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.758087   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.758193   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.760414   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.760746   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.760771   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.760844   30630 provision.go:143] copyHostCerts
	I1004 03:20:31.760875   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:20:31.760907   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 03:20:31.760915   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:20:31.760984   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 03:20:31.761064   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:20:31.761082   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 03:20:31.761088   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:20:31.761114   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 03:20:31.761166   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:20:31.761182   30630 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 03:20:31.761188   30630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:20:31.761214   30630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 03:20:31.761271   30630 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.ha-994751-m03 san=[127.0.0.1 192.168.39.53 ha-994751-m03 localhost minikube]
	I1004 03:20:31.828214   30630 provision.go:177] copyRemoteCerts
	I1004 03:20:31.828263   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:20:31.828283   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.830707   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.831047   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.831078   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.831192   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.831375   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.831522   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.831636   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:20:31.917792   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:20:31.917859   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:20:31.943534   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:20:31.943606   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1004 03:20:31.968990   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:20:31.969060   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 03:20:31.992331   30630 provision.go:87] duration metric: took 237.384107ms to configureAuth
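
configureAuth above generates a server certificate whose SANs cover the node's names and addresses (127.0.0.1, 192.168.39.53, ha-994751-m03, localhost, minikube). A crypto/x509 sketch of the same idea, self-signed for brevity and therefore only an approximation (the provisioner actually signs with ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Template carrying the same SANs the log lists for ha-994751-m03.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-994751-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-994751-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.53")},
	}
	// Self-signed here; the real flow uses the shared minikube CA as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
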
	I1004 03:20:31.992362   30630 buildroot.go:189] setting minikube options for container-runtime
	I1004 03:20:31.992622   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:20:31.992738   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:31.995570   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.995946   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:31.995975   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:31.996126   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:31.996306   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.996434   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:31.996569   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:31.996677   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:31.996863   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:31.996880   30630 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:20:32.229026   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:20:32.229061   30630 main.go:141] libmachine: Checking connection to Docker...
	I1004 03:20:32.229071   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetURL
	I1004 03:20:32.230237   30630 main.go:141] libmachine: (ha-994751-m03) DBG | Using libvirt version 6000000
	I1004 03:20:32.232533   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.232839   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.232870   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.233012   30630 main.go:141] libmachine: Docker is up and running!
	I1004 03:20:32.233029   30630 main.go:141] libmachine: Reticulating splines...
	I1004 03:20:32.233037   30630 client.go:171] duration metric: took 27.675822366s to LocalClient.Create
	I1004 03:20:32.233061   30630 start.go:167] duration metric: took 27.675885367s to libmachine.API.Create "ha-994751"
	I1004 03:20:32.233071   30630 start.go:293] postStartSetup for "ha-994751-m03" (driver="kvm2")
	I1004 03:20:32.233080   30630 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:20:32.233096   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.233315   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:20:32.233341   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:32.235889   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.236270   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.236297   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.236452   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:32.236641   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.236787   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:32.236936   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:20:32.321827   30630 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:20:32.326129   30630 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 03:20:32.326152   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 03:20:32.326232   30630 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 03:20:32.326328   30630 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 03:20:32.326339   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /etc/ssl/certs/168792.pem
	I1004 03:20:32.326421   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:20:32.336376   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:20:32.359653   30630 start.go:296] duration metric: took 126.571809ms for postStartSetup
	I1004 03:20:32.359721   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetConfigRaw
	I1004 03:20:32.360268   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetIP
	I1004 03:20:32.362856   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.363243   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.363268   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.363469   30630 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:20:32.363663   30630 start.go:128] duration metric: took 27.824325438s to createHost
	I1004 03:20:32.363686   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:32.365882   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.366210   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.366226   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.366350   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:32.366523   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.366674   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.366824   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:32.366985   30630 main.go:141] libmachine: Using SSH client type: native
	I1004 03:20:32.367180   30630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1004 03:20:32.367194   30630 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 03:20:32.480703   30630 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728012032.461011085
	
	I1004 03:20:32.480725   30630 fix.go:216] guest clock: 1728012032.461011085
	I1004 03:20:32.480735   30630 fix.go:229] Guest: 2024-10-04 03:20:32.461011085 +0000 UTC Remote: 2024-10-04 03:20:32.363675 +0000 UTC m=+146.676506004 (delta=97.336085ms)
	I1004 03:20:32.480753   30630 fix.go:200] guest clock delta is within tolerance: 97.336085ms
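
The clock check above runs `date +%s.%N` on the guest and compares the result with the host-side timestamp, accepting a small skew. A sketch reproducing that comparison with the two values captured in the log; the 2-second tolerance is an assumption for illustration, not the threshold coded in fix.go:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns `date +%s.%N` output into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fraction to 9 digits so it reads as nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1728012032.461011085") // guest clock from the log
	if err != nil {
		panic(err)
	}
	// Host-side timestamp from the same log line.
	local := time.Date(2024, time.October, 4, 3, 20, 32, 363675000, time.UTC)
	delta := guest.Sub(local)
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta.Abs() <= tolerance)
}
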
	I1004 03:20:32.480760   30630 start.go:83] releasing machines lock for "ha-994751-m03", held for 27.941569364s
	I1004 03:20:32.480780   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.480989   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetIP
	I1004 03:20:32.483796   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.484159   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.484191   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.486391   30630 out.go:177] * Found network options:
	I1004 03:20:32.487654   30630 out.go:177]   - NO_PROXY=192.168.39.65,192.168.39.117
	W1004 03:20:32.488913   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	W1004 03:20:32.488946   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:20:32.488964   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.489521   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.489776   30630 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:20:32.489869   30630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:20:32.489906   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	W1004 03:20:32.489985   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	W1004 03:20:32.490009   30630 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 03:20:32.490068   30630 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:20:32.490090   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:20:32.492646   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.492900   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.493125   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.493149   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.493245   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:32.493267   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:32.493334   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:32.493500   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.493556   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:20:32.493707   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:32.493736   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:20:32.493920   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:20:32.493987   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:20:32.494105   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:20:32.742057   30630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 03:20:32.749338   30630 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 03:20:32.749392   30630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:20:32.765055   30630 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 03:20:32.765079   30630 start.go:495] detecting cgroup driver to use...
	I1004 03:20:32.765139   30630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:20:32.780546   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:20:32.797729   30630 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:20:32.797789   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:20:32.810917   30630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:20:32.823880   30630 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:20:32.941749   30630 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:20:33.094803   30630 docker.go:233] disabling docker service ...
	I1004 03:20:33.094875   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:20:33.108945   30630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:20:33.122238   30630 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:20:33.259499   30630 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:20:33.382162   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:20:33.399956   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:20:33.419077   30630 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:20:33.419147   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.431123   30630 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:20:33.431176   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.442393   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.454523   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.465583   30630 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:20:33.477059   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.487953   30630 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:20:33.505077   30630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
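
The sequence of sed commands above pins the pause image and switches CRI-O to the cgroupfs cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf in place. An equivalent sketch using Go's regexp package, shown purely as an illustration of the two main substitutions (not the code in crio.go):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // file targeted by the sed commands above
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Same substitutions the provisioner performs with sed:
	// pin the pause image and force the cgroupfs cgroup manager.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
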
	I1004 03:20:33.515522   30630 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:20:33.526537   30630 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 03:20:33.526592   30630 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 03:20:33.540307   30630 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:20:33.550485   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:20:33.660459   30630 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 03:20:33.759769   30630 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:20:33.759862   30630 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:20:33.764677   30630 start.go:563] Will wait 60s for crictl version
	I1004 03:20:33.764728   30630 ssh_runner.go:195] Run: which crictl
	I1004 03:20:33.768748   30630 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:20:33.815756   30630 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 03:20:33.815849   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:20:33.843604   30630 ssh_runner.go:195] Run: crio --version
	I1004 03:20:33.875395   30630 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 03:20:33.876902   30630 out.go:177]   - env NO_PROXY=192.168.39.65
	I1004 03:20:33.878202   30630 out.go:177]   - env NO_PROXY=192.168.39.65,192.168.39.117
	I1004 03:20:33.879354   30630 main.go:141] libmachine: (ha-994751-m03) Calling .GetIP
	I1004 03:20:33.881763   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:33.882075   30630 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:20:33.882116   30630 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:20:33.882282   30630 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 03:20:33.887016   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:20:33.900617   30630 mustload.go:65] Loading cluster: ha-994751
	I1004 03:20:33.900859   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:20:33.901101   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:20:33.901139   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:20:33.916080   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40599
	I1004 03:20:33.916545   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:20:33.917019   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:20:33.917038   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:20:33.917311   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:20:33.917490   30630 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:20:33.918758   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:20:33.919091   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:20:33.919127   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:20:33.934895   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43781
	I1004 03:20:33.935369   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:20:33.935847   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:20:33.935870   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:20:33.936191   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:20:33.936373   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:20:33.936519   30630 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751 for IP: 192.168.39.53
	I1004 03:20:33.936531   30630 certs.go:194] generating shared ca certs ...
	I1004 03:20:33.936550   30630 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:20:33.936692   30630 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 03:20:33.936742   30630 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 03:20:33.936754   30630 certs.go:256] generating profile certs ...
	I1004 03:20:33.936848   30630 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key
	I1004 03:20:33.936877   30630 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.c7b5eb21
	I1004 03:20:33.936895   30630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.c7b5eb21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65 192.168.39.117 192.168.39.53 192.168.39.254]
	I1004 03:20:34.019919   30630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.c7b5eb21 ...
	I1004 03:20:34.019948   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.c7b5eb21: {Name:mk35ee00bf994088c6b50391189f3e324fc0101b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:20:34.020103   30630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.c7b5eb21 ...
	I1004 03:20:34.020114   30630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.c7b5eb21: {Name:mk408ba3330d2e90d98d309cc86d9e5e670f9570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:20:34.020180   30630 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.c7b5eb21 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt
	I1004 03:20:34.020296   30630 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.c7b5eb21 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key
	I1004 03:20:34.020411   30630 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key
	I1004 03:20:34.020425   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:20:34.020438   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:20:34.020452   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:20:34.020465   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:20:34.020477   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:20:34.020489   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:20:34.020501   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:20:34.035820   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:20:34.035890   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 03:20:34.035926   30630 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 03:20:34.035946   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:20:34.035969   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:20:34.035990   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:20:34.036010   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 03:20:34.036045   30630 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:20:34.036074   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem -> /usr/share/ca-certificates/16879.pem
	I1004 03:20:34.036087   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /usr/share/ca-certificates/168792.pem
	I1004 03:20:34.036100   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:20:34.036130   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:20:34.039080   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:20:34.039469   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:20:34.039485   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:20:34.039662   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:20:34.039893   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:20:34.040036   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:20:34.040151   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:20:34.112207   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1004 03:20:34.117935   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1004 03:20:34.131114   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1004 03:20:34.136170   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1004 03:20:34.149066   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1004 03:20:34.153717   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1004 03:20:34.167750   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1004 03:20:34.172288   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1004 03:20:34.184761   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1004 03:20:34.189707   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1004 03:20:34.201792   30630 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1004 03:20:34.206305   30630 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1004 03:20:34.218091   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:20:34.243235   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 03:20:34.267642   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:20:34.291741   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 03:20:34.317056   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1004 03:20:34.340832   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 03:20:34.364951   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:20:34.392565   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:20:34.419461   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 03:20:34.444597   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 03:20:34.470026   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:20:34.495443   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1004 03:20:34.513085   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1004 03:20:34.530602   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1004 03:20:34.548064   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1004 03:20:34.565179   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1004 03:20:34.582199   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1004 03:20:34.599469   30630 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1004 03:20:34.617008   30630 ssh_runner.go:195] Run: openssl version
	I1004 03:20:34.623238   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 03:20:34.635851   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 03:20:34.641242   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:20:34.641300   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 03:20:34.647354   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:20:34.660625   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:20:34.673563   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:20:34.678872   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:20:34.678918   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:20:34.685228   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:20:34.696965   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 03:20:34.708173   30630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 03:20:34.712666   30630 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:20:34.712728   30630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 03:20:34.718347   30630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
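
The openssl/ln pairing above exists because OpenSSL looks up trusted CAs in /etc/ssl/certs through symlinks named after the certificate's subject hash. A sketch that shells out to the same openssl invocation and creates the <hash>.0 link (linkByHash is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash reproduces the "openssl x509 -hash" plus "ln -fs" pairing above.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // ignore error: the link may not exist yet
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
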
	I1004 03:20:34.729423   30630 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:20:34.733599   30630 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 03:20:34.733645   30630 kubeadm.go:934] updating node {m03 192.168.39.53 8443 v1.31.1 crio true true} ...
	I1004 03:20:34.733734   30630 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-994751-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 03:20:34.733759   30630 kube-vip.go:115] generating kube-vip config ...
	I1004 03:20:34.733788   30630 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1004 03:20:34.753104   30630 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1004 03:20:34.753160   30630 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
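
The kube-vip static-pod manifest above is generated per cluster, with the VIP address and API server port filled in. A trimmed text/template sketch of that substitution, keeping only the fields that vary between clusters; the full manifest printed in the log is the authoritative shape:

package main

import (
	"os"
	"text/template"
)

// A cut-down stand-in for the manifest above: only the per-cluster values
// (VIP address and API server port) are templated here.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.3
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values taken from the generated config in the log.
	if err := t.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "192.168.39.254", Port: 8443}); err != nil {
		panic(err)
	}
}
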
	I1004 03:20:34.753207   30630 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:20:34.764605   30630 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1004 03:20:34.764653   30630 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1004 03:20:34.776026   30630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1004 03:20:34.776058   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1004 03:20:34.776073   30630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1004 03:20:34.776077   30630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1004 03:20:34.776094   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1004 03:20:34.776111   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1004 03:20:34.776123   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:20:34.776154   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1004 03:20:34.784508   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1004 03:20:34.784532   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1004 03:20:34.784546   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1004 03:20:34.784554   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1004 03:20:34.816412   30630 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1004 03:20:34.816537   30630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1004 03:20:34.932259   30630 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1004 03:20:34.932304   30630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
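
The log above references each release binary by URL together with a checksum file (for example https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet and its .sha256 companion). A sketch of downloading such a binary and verifying it against the published digest using net/http and crypto/sha256; the /tmp destination is illustrative only:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL to a local file and returns its SHA-256 hex digest.
func fetch(url, dest string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
	got, err := fetch(base, "/tmp/kubelet")
	if err != nil {
		panic(err)
	}
	// The published .sha256 file holds the expected digest.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	if got != strings.TrimSpace(string(want)) {
		fmt.Fprintln(os.Stderr, "checksum mismatch for kubelet")
		os.Exit(1)
	}
	fmt.Println("kubelet verified:", got)
}
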
	I1004 03:20:35.665849   30630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1004 03:20:35.676114   30630 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1004 03:20:35.694028   30630 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:20:35.718864   30630 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1004 03:20:35.736291   30630 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:20:35.740907   30630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:20:35.753115   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:20:35.870874   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:20:35.888175   30630 host.go:66] Checking if "ha-994751" exists ...
	I1004 03:20:35.888614   30630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:20:35.888675   30630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:20:35.903712   30630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33367
	I1004 03:20:35.904202   30630 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:20:35.904676   30630 main.go:141] libmachine: Using API Version  1
	I1004 03:20:35.904700   30630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:20:35.904994   30630 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:20:35.905194   30630 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:20:35.905357   30630 start.go:317] joinCluster: &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:20:35.905474   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1004 03:20:35.905495   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:20:35.908275   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:20:35.908713   30630 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:20:35.908739   30630 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:20:35.908875   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:20:35.909047   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:20:35.909173   30630 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:20:35.909303   30630 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:20:36.083592   30630 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:20:36.083645   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e5abq7.epvk18yjfmjj0i7x --discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-994751-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443"
	I1004 03:20:57.688048   30630 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e5abq7.epvk18yjfmjj0i7x --discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-994751-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443": (21.604380186s)
	I1004 03:20:57.688081   30630 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1004 03:20:58.272843   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-994751-m03 minikube.k8s.io/updated_at=2024_10_04T03_20_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=ha-994751 minikube.k8s.io/primary=false
	I1004 03:20:58.405355   30630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-994751-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1004 03:20:58.529681   30630 start.go:319] duration metric: took 22.624319783s to joinCluster
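The join sequence above has three steps: mint a join command on an existing control-plane node (kubeadm token create --print-join-command --ttl=0), run it on the new machine with --control-plane plus the CRI-socket and advertise-address flags, then label the node and drop the control-plane NoSchedule taint. A rough Go sketch of the same flow, assuming kubeadm and kubectl are on PATH; runOn is a hypothetical stand-in for minikube's SSH runner, and the node name, address, and flags are copied from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runOn is a hypothetical stand-in for minikube's ssh_runner; here it simply
// runs the command through a local bash.
func runOn(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// 1. On an existing control-plane node: print a join command with a non-expiring token.
	join, err := runOn("sudo kubeadm token create --print-join-command --ttl=0")
	if err != nil {
		panic(err)
	}

	// 2. On the new machine: join as an additional control plane (flags as in the log).
	joinCmd := "sudo " + strings.TrimSpace(join) +
		" --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock" +
		" --node-name=ha-994751-m03 --control-plane" +
		" --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443"
	if out, err := runOn(joinCmd); err != nil {
		fmt.Println(out)
		panic(err)
	}

	// 3. Mark the node and let it schedule ordinary workloads again.
	for _, c := range []string{
		"kubectl label --overwrite nodes ha-994751-m03 minikube.k8s.io/primary=false",
		"kubectl taint nodes ha-994751-m03 node-role.kubernetes.io/control-plane:NoSchedule-",
	} {
		if out, err := runOn(c); err != nil {
			fmt.Println(out)
			panic(err)
		}
	}
}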
	I1004 03:20:58.529762   30630 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 03:20:58.530014   30630 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:20:58.531345   30630 out.go:177] * Verifying Kubernetes components...
	I1004 03:20:58.532710   30630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:20:58.800802   30630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:20:58.844203   30630 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:20:58.844571   30630 kapi.go:59] client config for ha-994751: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key", CAFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1004 03:20:58.844645   30630 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.65:8443
	I1004 03:20:58.844892   30630 node_ready.go:35] waiting up to 6m0s for node "ha-994751-m03" to be "Ready" ...
	I1004 03:20:58.844972   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:20:58.844982   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:58.844998   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:58.845007   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:58.848088   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:20:59.345094   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:20:59.345120   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:59.345130   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:59.345135   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:59.353141   30630 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:20:59.845733   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:20:59.845805   30630 round_trippers.go:469] Request Headers:
	I1004 03:20:59.845823   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:20:59.845832   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:20:59.850171   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:00.345129   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:00.345150   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:00.345159   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:00.345163   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:00.348609   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:00.845173   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:00.845196   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:00.845205   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:00.845210   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:00.850207   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:00.851383   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:01.345051   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:01.345072   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:01.345079   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:01.345083   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:01.349207   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:01.845336   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:01.845357   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:01.845364   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:01.845369   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:01.848367   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:02.345495   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:02.345521   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:02.345529   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:02.345534   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:02.349838   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:02.845704   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:02.845732   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:02.845745   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:02.845752   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:02.849074   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:03.345450   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:03.345472   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:03.345480   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:03.345484   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:03.349082   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:03.349671   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:03.846035   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:03.846061   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:03.846072   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:03.846079   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:03.850455   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:04.345156   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:04.345183   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:04.345191   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:04.345196   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:04.349346   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:04.845676   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:04.845695   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:04.845702   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:04.845707   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:04.849977   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:05.345993   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:05.346019   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:05.346028   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:05.346032   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:05.350487   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:05.352077   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:05.845454   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:05.845473   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:05.845486   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:05.845493   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:05.848902   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:06.345394   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:06.345416   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:06.345424   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:06.345428   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:06.348963   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:06.846045   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:06.846066   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:06.846077   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:06.846084   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:06.849291   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:07.345224   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:07.345249   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:07.345258   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:07.345261   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:07.348950   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:07.845773   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:07.845797   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:07.845807   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:07.845812   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:07.853790   30630 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 03:21:07.854460   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:08.345396   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:08.345417   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:08.345425   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:08.345430   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:08.348967   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:08.845960   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:08.845987   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:08.845998   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:08.846004   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:08.849592   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:09.345163   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:09.345187   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:09.345195   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:09.345199   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:09.348412   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:09.845700   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:09.845720   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:09.845727   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:09.845732   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:09.848850   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:10.346002   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:10.346024   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:10.346036   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:10.346041   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:10.349778   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:10.350421   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:10.845273   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:10.845342   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:10.845357   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:10.845364   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:10.849249   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:11.345450   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:11.345474   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:11.345485   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:11.345490   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:11.348615   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:11.845521   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:11.845544   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:11.845552   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:11.845557   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:11.851020   30630 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 03:21:12.345427   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:12.345455   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:12.345466   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:12.345473   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:12.348894   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:12.845773   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:12.845807   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:12.845815   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:12.845821   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:12.849096   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:12.849859   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:13.345600   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:13.345625   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:13.345635   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:13.345641   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:13.348986   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:13.845088   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:13.845115   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:13.845122   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:13.845126   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:13.848813   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:14.345772   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:14.345796   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:14.345804   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:14.345809   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:14.349538   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:14.845967   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:14.845999   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:14.846010   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:14.846015   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:14.849646   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:14.850106   30630 node_ready.go:53] node "ha-994751-m03" has status "Ready":"False"
	I1004 03:21:15.345479   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:15.345501   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:15.345509   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:15.345514   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:15.348633   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:15.845308   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:15.845329   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:15.845337   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:15.845342   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:15.848613   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:16.345615   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:16.345635   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.345697   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.345709   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.349189   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:16.845211   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:16.845234   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.845243   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.845247   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.848314   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:16.848965   30630 node_ready.go:49] node "ha-994751-m03" has status "Ready":"True"
	I1004 03:21:16.848983   30630 node_ready.go:38] duration metric: took 18.004075427s for node "ha-994751-m03" to be "Ready" ...
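The block of repeated GETs above is the node_ready wait: roughly every 500ms minikube fetches the node object and checks whether its Ready condition has turned True, giving up after 6 minutes. A minimal client-go equivalent; the kubeconfig path and node name are taken from the log, and the polling helper is just one reasonable way to write the loop.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19546-9647/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms for up to 6 minutes, matching the wait shown in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-994751-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node ha-994751-m03 is Ready")
}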
	I1004 03:21:16.848993   30630 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:21:16.849057   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:16.849066   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.849073   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.849077   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.855878   30630 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:21:16.863339   30630 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.863413   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-l6zst
	I1004 03:21:16.863421   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.863428   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.863432   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.866627   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:16.867225   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:16.867246   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.867254   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.867257   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.869745   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.870174   30630 pod_ready.go:93] pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:16.870189   30630 pod_ready.go:82] duration metric: took 6.828744ms for pod "coredns-7c65d6cfc9-l6zst" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.870197   30630 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.870257   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zgdck
	I1004 03:21:16.870266   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.870272   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.870277   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.872665   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.873280   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:16.873293   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.873300   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.873304   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.875767   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.876277   30630 pod_ready.go:93] pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:16.876299   30630 pod_ready.go:82] duration metric: took 6.094854ms for pod "coredns-7c65d6cfc9-zgdck" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.876312   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.876381   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751
	I1004 03:21:16.876394   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.876405   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.876415   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.878641   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.879297   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:16.879315   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.879323   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.879330   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.881505   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.881911   30630 pod_ready.go:93] pod "etcd-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:16.881925   30630 pod_ready.go:82] duration metric: took 5.606429ms for pod "etcd-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.881933   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.881973   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751-m02
	I1004 03:21:16.881980   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.881986   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.881991   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.884217   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.884882   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:16.884896   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:16.884903   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:16.884907   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:16.887109   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:16.887576   30630 pod_ready.go:93] pod "etcd-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:16.887592   30630 pod_ready.go:82] duration metric: took 5.65336ms for pod "etcd-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:16.887600   30630 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.046004   30630 request.go:632] Waited for 158.354973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751-m03
	I1004 03:21:17.046081   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-994751-m03
	I1004 03:21:17.046092   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.046103   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.046113   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.049599   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:17.245822   30630 request.go:632] Waited for 195.387196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:17.245913   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:17.245920   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.245929   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.245937   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.249684   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:17.250373   30630 pod_ready.go:93] pod "etcd-ha-994751-m03" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:17.250391   30630 pod_ready.go:82] duration metric: took 362.785163ms for pod "etcd-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
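The "Waited ... due to client-side throttling" lines are client-go's default rate limiter at work: the rest.Config dumped earlier shows QPS:0 and Burst:0, so the defaults of 5 requests per second with a burst of 10 apply, and the rapid pod/node GET pairs get spaced out by roughly 200ms each. If that throttling mattered, a caller could raise the limits before building the clientset; a small sketch with illustrative values follows.

package kubeutil

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newFastClient bumps client-go's client-side rate limits so tight polling
// loops are not delayed by the default 5 QPS / burst-10 limiter.
// The numbers here are illustrative, not what minikube uses.
func newFastClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}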
	I1004 03:21:17.250406   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.445530   30630 request.go:632] Waited for 195.055856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751
	I1004 03:21:17.445608   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751
	I1004 03:21:17.445614   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.445621   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.445627   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.449209   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:17.645177   30630 request.go:632] Waited for 195.266127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:17.645277   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:17.645290   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.645300   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.645307   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.648339   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:17.648978   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:17.648997   30630 pod_ready.go:82] duration metric: took 398.583614ms for pod "kube-apiserver-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.649010   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:17.845996   30630 request.go:632] Waited for 196.900731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m02
	I1004 03:21:17.846073   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m02
	I1004 03:21:17.846082   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:17.846092   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:17.846097   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:17.849729   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.045771   30630 request.go:632] Waited for 195.364695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:18.045824   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:18.045829   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.045837   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.045843   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.049741   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.050457   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:18.050479   30630 pod_ready.go:82] duration metric: took 401.458645ms for pod "kube-apiserver-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.050491   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.245708   30630 request.go:632] Waited for 195.123371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m03
	I1004 03:21:18.245779   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-994751-m03
	I1004 03:21:18.245788   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.245798   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.245805   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.248803   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:18.445802   30630 request.go:632] Waited for 196.359557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:18.445880   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:18.445891   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.445903   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.445912   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.449153   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.449859   30630 pod_ready.go:93] pod "kube-apiserver-ha-994751-m03" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:18.449875   30630 pod_ready.go:82] duration metric: took 399.376745ms for pod "kube-apiserver-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.449884   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.646109   30630 request.go:632] Waited for 196.148252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751
	I1004 03:21:18.646174   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751
	I1004 03:21:18.646181   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.646190   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.646196   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.649910   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.845959   30630 request.go:632] Waited for 195.355273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:18.846052   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:18.846066   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:18.846077   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:18.846084   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:18.849452   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:18.849983   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:18.849999   30630 pod_ready.go:82] duration metric: took 400.109282ms for pod "kube-controller-manager-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:18.850007   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.045892   30630 request.go:632] Waited for 195.812536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m02
	I1004 03:21:19.045949   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m02
	I1004 03:21:19.045954   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.045962   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.045965   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.049481   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:19.245703   30630 request.go:632] Waited for 195.37604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:19.245795   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:19.245807   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.245816   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.245821   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.249221   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:19.249770   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:19.249786   30630 pod_ready.go:82] duration metric: took 399.773598ms for pod "kube-controller-manager-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.249797   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.445959   30630 request.go:632] Waited for 196.084722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m03
	I1004 03:21:19.446017   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-994751-m03
	I1004 03:21:19.446023   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.446030   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.446034   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.449595   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:19.646055   30630 request.go:632] Waited for 195.452676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:19.646103   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:19.646110   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.646121   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.646126   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.649308   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:19.649980   30630 pod_ready.go:93] pod "kube-controller-manager-ha-994751-m03" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:19.650000   30630 pod_ready.go:82] duration metric: took 400.193489ms for pod "kube-controller-manager-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.650010   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9q6q2" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:19.846046   30630 request.go:632] Waited for 195.979747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9q6q2
	I1004 03:21:19.846103   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9q6q2
	I1004 03:21:19.846109   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:19.846116   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:19.846121   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:19.850032   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.045346   30630 request.go:632] Waited for 194.290233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:20.045412   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:20.045419   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.045429   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.045435   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.049187   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.049735   30630 pod_ready.go:93] pod "kube-proxy-9q6q2" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:20.049758   30630 pod_ready.go:82] duration metric: took 399.740576ms for pod "kube-proxy-9q6q2" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.049773   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f44b9" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.245829   30630 request.go:632] Waited for 195.994651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f44b9
	I1004 03:21:20.245916   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f44b9
	I1004 03:21:20.245926   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.245933   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.245938   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.248898   30630 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 03:21:20.445831   30630 request.go:632] Waited for 196.355752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:20.445904   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:20.445910   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.445921   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.445925   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.449843   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.450548   30630 pod_ready.go:93] pod "kube-proxy-f44b9" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:20.450575   30630 pod_ready.go:82] duration metric: took 400.789271ms for pod "kube-proxy-f44b9" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.450587   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ph6cf" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.645991   30630 request.go:632] Waited for 195.320241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ph6cf
	I1004 03:21:20.646051   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ph6cf
	I1004 03:21:20.646061   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.646072   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.646084   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.649526   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.845351   30630 request.go:632] Waited for 195.084601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:20.845415   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:20.845423   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:20.845433   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:20.845439   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:20.849107   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:20.849683   30630 pod_ready.go:93] pod "kube-proxy-ph6cf" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:20.849702   30630 pod_ready.go:82] duration metric: took 399.106228ms for pod "kube-proxy-ph6cf" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:20.849714   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.046211   30630 request.go:632] Waited for 196.431281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751
	I1004 03:21:21.046274   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751
	I1004 03:21:21.046287   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.046297   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.046303   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.049644   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:21.245652   30630 request.go:632] Waited for 195.357611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:21.245701   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751
	I1004 03:21:21.245707   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.245717   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.245729   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.248937   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:21.249459   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:21.249477   30630 pod_ready.go:82] duration metric: took 399.754955ms for pod "kube-scheduler-ha-994751" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.249485   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.445624   30630 request.go:632] Waited for 196.058326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m02
	I1004 03:21:21.445695   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m02
	I1004 03:21:21.445700   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.445708   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.445713   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.449658   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:21.645861   30630 request.go:632] Waited for 195.383024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:21.645947   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m02
	I1004 03:21:21.645959   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.646444   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.646457   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.649535   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:21.650129   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:21.650145   30630 pod_ready.go:82] duration metric: took 400.653773ms for pod "kube-scheduler-ha-994751-m02" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.650155   30630 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:21.846280   30630 request.go:632] Waited for 196.044885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m03
	I1004 03:21:21.846336   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-994751-m03
	I1004 03:21:21.846342   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:21.846349   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:21.846354   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:21.849713   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:22.045755   30630 request.go:632] Waited for 195.414064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:22.045827   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes/ha-994751-m03
	I1004 03:21:22.045834   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.045841   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.045847   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.049538   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:22.050359   30630 pod_ready.go:93] pod "kube-scheduler-ha-994751-m03" in "kube-system" namespace has status "Ready":"True"
	I1004 03:21:22.050378   30630 pod_ready.go:82] duration metric: took 400.213357ms for pod "kube-scheduler-ha-994751-m03" in "kube-system" namespace to be "Ready" ...
	I1004 03:21:22.050389   30630 pod_ready.go:39] duration metric: took 5.201387664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:21:22.050412   30630 api_server.go:52] waiting for apiserver process to appear ...
	I1004 03:21:22.050477   30630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:21:22.066998   30630 api_server.go:72] duration metric: took 23.53720299s to wait for apiserver process to appear ...
	I1004 03:21:22.067023   30630 api_server.go:88] waiting for apiserver healthz status ...
	I1004 03:21:22.067042   30630 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I1004 03:21:22.074791   30630 api_server.go:279] https://192.168.39.65:8443/healthz returned 200:
	ok
	I1004 03:21:22.074864   30630 round_trippers.go:463] GET https://192.168.39.65:8443/version
	I1004 03:21:22.074872   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.074885   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.074896   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.075865   30630 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1004 03:21:22.075921   30630 api_server.go:141] control plane version: v1.31.1
	I1004 03:21:22.075934   30630 api_server.go:131] duration metric: took 8.905409ms to wait for apiserver health ...
	I1004 03:21:22.075941   30630 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 03:21:22.245389   30630 request.go:632] Waited for 169.386949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:22.245481   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:22.245490   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.245505   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.245516   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.251617   30630 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:21:22.258944   30630 system_pods.go:59] 24 kube-system pods found
	I1004 03:21:22.258969   30630 system_pods.go:61] "coredns-7c65d6cfc9-l6zst" [554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab] Running
	I1004 03:21:22.258974   30630 system_pods.go:61] "coredns-7c65d6cfc9-zgdck" [dcd6ed49-8491-4eb0-9863-b498c76ec3c5] Running
	I1004 03:21:22.258980   30630 system_pods.go:61] "etcd-ha-994751" [ad26bfe8-b3b3-44fb-8f83-fd4f62a92ea6] Running
	I1004 03:21:22.258984   30630 system_pods.go:61] "etcd-ha-994751-m02" [540bc27b-d1ee-48e7-99a3-5daf036ec06f] Running
	I1004 03:21:22.258987   30630 system_pods.go:61] "etcd-ha-994751-m03" [610c4e0c-9af8-441e-9524-ccd6fe6fe390] Running
	I1004 03:21:22.258990   30630 system_pods.go:61] "kindnet-2mhh2" [442d5ad9-dc9c-4a07-90b3-549591f9d2f1] Running
	I1004 03:21:22.258992   30630 system_pods.go:61] "kindnet-clt5p" [a904ebc8-f149-4b9f-9637-a37cb56af836] Running
	I1004 03:21:22.258994   30630 system_pods.go:61] "kindnet-rmcvt" [08ef4494-9229-4cd3-b22e-10709b88e14c] Running
	I1004 03:21:22.258997   30630 system_pods.go:61] "kube-apiserver-ha-994751" [68f6b078-f7ae-4c2d-b424-372647c7d203] Running
	I1004 03:21:22.259012   30630 system_pods.go:61] "kube-apiserver-ha-994751-m02" [f4773005-30fa-46bc-b372-bbd0c7bfc1f1] Running
	I1004 03:21:22.259017   30630 system_pods.go:61] "kube-apiserver-ha-994751-m03" [42150ae1-b298-4974-976f-05e9a2a32154] Running
	I1004 03:21:22.259020   30630 system_pods.go:61] "kube-controller-manager-ha-994751" [7a09373f-d6d5-4fa2-bc68-0f2ba151761f] Running
	I1004 03:21:22.259023   30630 system_pods.go:61] "kube-controller-manager-ha-994751-m02" [31278e89-fb33-4713-b0e4-3b589944caa9] Running
	I1004 03:21:22.259027   30630 system_pods.go:61] "kube-controller-manager-ha-994751-m03" [5897468d-7872-4fed-81bc-bf9b37e42ef4] Running
	I1004 03:21:22.259030   30630 system_pods.go:61] "kube-proxy-9q6q2" [a3b96ca0-fe8c-4492-a05c-5f8ff9cb8d3f] Running
	I1004 03:21:22.259033   30630 system_pods.go:61] "kube-proxy-f44b9" [e3e1a917-0150-4608-b5f3-b590d330d2ce] Running
	I1004 03:21:22.259036   30630 system_pods.go:61] "kube-proxy-ph6cf" [0eef5964-876b-4cee-ad4c-c93ab034d3f9] Running
	I1004 03:21:22.259039   30630 system_pods.go:61] "kube-scheduler-ha-994751" [527e5bff-7234-4ab0-9952-fe9ec87ab01a] Running
	I1004 03:21:22.259042   30630 system_pods.go:61] "kube-scheduler-ha-994751-m02" [9c89432d-e607-4e86-8d68-12ee0c2f3170] Running
	I1004 03:21:22.259046   30630 system_pods.go:61] "kube-scheduler-ha-994751-m03" [f53fda60-a075-4f78-a64b-52e960a4b28b] Running
	I1004 03:21:22.259048   30630 system_pods.go:61] "kube-vip-ha-994751" [7955e17e-6c22-49b3-aa6a-9d37b9bc7942] Running
	I1004 03:21:22.259051   30630 system_pods.go:61] "kube-vip-ha-994751-m02" [944f1844-b8c2-410e-981d-3705beb22638] Running
	I1004 03:21:22.259054   30630 system_pods.go:61] "kube-vip-ha-994751-m03" [9ec22347-f3d6-419e-867a-0de177976203] Running
	I1004 03:21:22.259056   30630 system_pods.go:61] "storage-provisioner" [cc60903f-91b9-4e59-92ab-9f16c09d38d2] Running
	I1004 03:21:22.259062   30630 system_pods.go:74] duration metric: took 183.116626ms to wait for pod list to return data ...
	I1004 03:21:22.259072   30630 default_sa.go:34] waiting for default service account to be created ...
	I1004 03:21:22.445504   30630 request.go:632] Waited for 186.355323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:21:22.445557   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/default/serviceaccounts
	I1004 03:21:22.445563   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.445570   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.445575   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.449437   30630 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 03:21:22.449567   30630 default_sa.go:45] found service account: "default"
	I1004 03:21:22.449589   30630 default_sa.go:55] duration metric: took 190.510625ms for default service account to be created ...
	I1004 03:21:22.449599   30630 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 03:21:22.646023   30630 request.go:632] Waited for 196.345892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:22.646077   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/namespaces/kube-system/pods
	I1004 03:21:22.646096   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.646106   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.646115   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.652169   30630 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 03:21:22.660351   30630 system_pods.go:86] 24 kube-system pods found
	I1004 03:21:22.660376   30630 system_pods.go:89] "coredns-7c65d6cfc9-l6zst" [554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab] Running
	I1004 03:21:22.660386   30630 system_pods.go:89] "coredns-7c65d6cfc9-zgdck" [dcd6ed49-8491-4eb0-9863-b498c76ec3c5] Running
	I1004 03:21:22.660391   30630 system_pods.go:89] "etcd-ha-994751" [ad26bfe8-b3b3-44fb-8f83-fd4f62a92ea6] Running
	I1004 03:21:22.660395   30630 system_pods.go:89] "etcd-ha-994751-m02" [540bc27b-d1ee-48e7-99a3-5daf036ec06f] Running
	I1004 03:21:22.660398   30630 system_pods.go:89] "etcd-ha-994751-m03" [610c4e0c-9af8-441e-9524-ccd6fe6fe390] Running
	I1004 03:21:22.660402   30630 system_pods.go:89] "kindnet-2mhh2" [442d5ad9-dc9c-4a07-90b3-549591f9d2f1] Running
	I1004 03:21:22.660405   30630 system_pods.go:89] "kindnet-clt5p" [a904ebc8-f149-4b9f-9637-a37cb56af836] Running
	I1004 03:21:22.660408   30630 system_pods.go:89] "kindnet-rmcvt" [08ef4494-9229-4cd3-b22e-10709b88e14c] Running
	I1004 03:21:22.660412   30630 system_pods.go:89] "kube-apiserver-ha-994751" [68f6b078-f7ae-4c2d-b424-372647c7d203] Running
	I1004 03:21:22.660416   30630 system_pods.go:89] "kube-apiserver-ha-994751-m02" [f4773005-30fa-46bc-b372-bbd0c7bfc1f1] Running
	I1004 03:21:22.660419   30630 system_pods.go:89] "kube-apiserver-ha-994751-m03" [42150ae1-b298-4974-976f-05e9a2a32154] Running
	I1004 03:21:22.660423   30630 system_pods.go:89] "kube-controller-manager-ha-994751" [7a09373f-d6d5-4fa2-bc68-0f2ba151761f] Running
	I1004 03:21:22.660426   30630 system_pods.go:89] "kube-controller-manager-ha-994751-m02" [31278e89-fb33-4713-b0e4-3b589944caa9] Running
	I1004 03:21:22.660432   30630 system_pods.go:89] "kube-controller-manager-ha-994751-m03" [5897468d-7872-4fed-81bc-bf9b37e42ef4] Running
	I1004 03:21:22.660437   30630 system_pods.go:89] "kube-proxy-9q6q2" [a3b96ca0-fe8c-4492-a05c-5f8ff9cb8d3f] Running
	I1004 03:21:22.660440   30630 system_pods.go:89] "kube-proxy-f44b9" [e3e1a917-0150-4608-b5f3-b590d330d2ce] Running
	I1004 03:21:22.660443   30630 system_pods.go:89] "kube-proxy-ph6cf" [0eef5964-876b-4cee-ad4c-c93ab034d3f9] Running
	I1004 03:21:22.660450   30630 system_pods.go:89] "kube-scheduler-ha-994751" [527e5bff-7234-4ab0-9952-fe9ec87ab01a] Running
	I1004 03:21:22.660453   30630 system_pods.go:89] "kube-scheduler-ha-994751-m02" [9c89432d-e607-4e86-8d68-12ee0c2f3170] Running
	I1004 03:21:22.660456   30630 system_pods.go:89] "kube-scheduler-ha-994751-m03" [f53fda60-a075-4f78-a64b-52e960a4b28b] Running
	I1004 03:21:22.660465   30630 system_pods.go:89] "kube-vip-ha-994751" [7955e17e-6c22-49b3-aa6a-9d37b9bc7942] Running
	I1004 03:21:22.660470   30630 system_pods.go:89] "kube-vip-ha-994751-m02" [944f1844-b8c2-410e-981d-3705beb22638] Running
	I1004 03:21:22.660473   30630 system_pods.go:89] "kube-vip-ha-994751-m03" [9ec22347-f3d6-419e-867a-0de177976203] Running
	I1004 03:21:22.660476   30630 system_pods.go:89] "storage-provisioner" [cc60903f-91b9-4e59-92ab-9f16c09d38d2] Running
	I1004 03:21:22.660481   30630 system_pods.go:126] duration metric: took 210.876444ms to wait for k8s-apps to be running ...
	I1004 03:21:22.660493   30630 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 03:21:22.660540   30630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:21:22.675933   30630 system_svc.go:56] duration metric: took 15.434198ms WaitForService to wait for kubelet
	I1004 03:21:22.675957   30630 kubeadm.go:582] duration metric: took 24.146164676s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:21:22.675972   30630 node_conditions.go:102] verifying NodePressure condition ...
	I1004 03:21:22.845860   30630 request.go:632] Waited for 169.820621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.65:8443/api/v1/nodes
	I1004 03:21:22.845932   30630 round_trippers.go:463] GET https://192.168.39.65:8443/api/v1/nodes
	I1004 03:21:22.845941   30630 round_trippers.go:469] Request Headers:
	I1004 03:21:22.845948   30630 round_trippers.go:473]     Accept: application/json, */*
	I1004 03:21:22.845959   30630 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 03:21:22.850058   30630 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 03:21:22.851493   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:21:22.851511   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:21:22.851521   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:21:22.851525   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:21:22.851529   30630 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 03:21:22.851534   30630 node_conditions.go:123] node cpu capacity is 2
	I1004 03:21:22.851538   30630 node_conditions.go:105] duration metric: took 175.561582ms to run NodePressure ...
	I1004 03:21:22.851551   30630 start.go:241] waiting for startup goroutines ...
	I1004 03:21:22.851569   30630 start.go:255] writing updated cluster config ...
	I1004 03:21:22.851861   30630 ssh_runner.go:195] Run: rm -f paused
	I1004 03:21:22.904494   30630 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 03:21:22.906685   30630 out.go:177] * Done! kubectl is now configured to use "ha-994751" cluster and "default" namespace by default
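
Editor's note: the trace above ends with the two readiness gates minikube applies before declaring the cluster up — polling each control-plane pod's Ready condition (pod_ready.go) and probing the apiserver's /healthz endpoint (api_server.go). The following is a minimal illustrative sketch, not part of the test suite, of reproducing those two checks with client-go; the file name, kubeconfig path, and error handling are assumptions, and the pod name is taken from the log above.

// readiness_probe_sketch.go — hypothetical standalone sketch of the checks traced
// in the minikube log above: wait for a pod's Ready condition, then GET /healthz.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig that "minikube start" writes (~/.kube/config, context "ha-994751" in this run).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// 1. Poll a control-plane pod until its Ready condition is True — the same
	//    check pod_ready.go performs for kube-scheduler-ha-994751 in the log.
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-ha-994751", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			break
		}
		time.Sleep(2 * time.Second)
	}

	// 2. Hit /healthz on the apiserver, mirroring the api_server.go health check.
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Printf("/healthz returned: %s\n", body)
}

// isReady reports whether the pod has condition Ready=True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

Run against the cluster this log describes, the loop exits immediately (all kube-scheduler pods report Ready) and /healthz returns "ok", matching the 200 responses recorded above.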
	
	
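Editor's note: the CRI-O section below is the runtime-side view of the same node. The debug entries show unfiltered Version, ListPodSandbox, and ListContainers calls arriving over CRI-O's gRPC socket (/runtime.v1.RuntimeService/...), which is how the kubelet and the test's log collector enumerate sandboxes and containers. A minimal sketch, assuming the default CRI-O socket path on the node, of issuing the same Version and ListContainers calls with the CRI client libraries (crictl does the equivalent):

// cri_list_containers_sketch.go — hypothetical sketch of the CRI calls visible in
// the CRI-O debug log below; socket path and timeout are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O listens on a local unix socket; /var/run/crio/crio.sock is the default.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Same as the "/runtime.v1.RuntimeService/Version" requests in the log.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// Same as the unfiltered "/runtime.v1.RuntimeService/ListContainers" requests.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
	}
}

On this node the output would list the running busybox, coredns, storage-provisioner, kindnet, kube-proxy, kube-vip, and control-plane containers enumerated in the responses below.
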
	==> CRI-O <==
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.909644371Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-vh5j6,Uid:1e13c9e5-3c5b-47b9-8f41-391304b4184c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728012084122158637,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T03:21:23.807271406Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:cc60903f-91b9-4e59-92ab-9f16c09d38d2,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1728011946640149577,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-04T03:19:06.314614114Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-zgdck,Uid:dcd6ed49-8491-4eb0-9863-b498c76ec3c5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728011946639081079,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T03:19:06.316385216Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-l6zst,Uid:554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1728011946615050433,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5bd8-44d6-a7dd-6eb87ef3b9ab,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T03:19:06.307604522Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&PodSandboxMetadata{Name:kindnet-2mhh2,Uid:442d5ad9-dc9c-4a07-90b3-549591f9d2f1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728011934078857830,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-10-04T03:18:52.255087227Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&PodSandboxMetadata{Name:kube-proxy-f44b9,Uid:e3e1a917-0150-4608-b5f3-b590d330d2ce,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728011934041548754,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T03:18:52.233691775Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-994751,Uid:d09d862da2ecf4fa4a0cc55773908218,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1728011921072544695,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d09d862da2ecf4fa4a0cc55773908218,kubernetes.io/config.seen: 2024-10-04T03:18:40.378105659Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-994751,Uid:940a4ffe37e8a399065ce324e2a3e96a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728011921066325762,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{kubernetes.io/config.hash: 940a
4ffe37e8a399065ce324e2a3e96a,kubernetes.io/config.seen: 2024-10-04T03:18:40.378106459Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-994751,Uid:c779652e8162a5324e798545569be164,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728011921058626396,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c779652e8162a5324e798545569be164,kubernetes.io/config.seen: 2024-10-04T03:18:40.378104500Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-994751,Ui
d:ca68d6f5cb32227962ccd27f257d0736,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728011921056535594,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.65:8443,kubernetes.io/config.hash: ca68d6f5cb32227962ccd27f257d0736,kubernetes.io/config.seen: 2024-10-04T03:18:40.378102927Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&PodSandboxMetadata{Name:etcd-ha-994751,Uid:15f64e9e1b892e5a5392a0aa1691bb56,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728011921055240968,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-994751,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.65:2379,kubernetes.io/config.hash: 15f64e9e1b892e5a5392a0aa1691bb56,kubernetes.io/config.seen: 2024-10-04T03:18:40.378098560Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=21db3e66-4442-4793-9172-1287b650377b name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.910440665Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e11566f-a8d0-4c57-99a4-2dfb11aa0541 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.910514144Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e11566f-a8d0-4c57-99a4-2dfb11aa0541 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.910914008Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dd8849f48bb11b35bb1f5bd585512cb0b89dc44b163449a6b50f15f54de02a5,PodSandboxId:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728012087721104622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586,PodSandboxId:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946866667557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe1e8ec5dfe43f5131ba996561aaabff07b86db463b8b1773b21b5e5c2d1261,PodSandboxId:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728011946910404717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd,PodSandboxId:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946858124672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5b
d8-44d6-a7dd-6eb87ef3b9ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99,PodSandboxId:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17280119
34650609445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160,PodSandboxId:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728011934180469533,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f,PodSandboxId:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728011924453506492,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec,PodSandboxId:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728011921323682309,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec,PodSandboxId:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728011921323115948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe,PodSandboxId:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728011921253183954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8,PodSandboxId:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728011921250593100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e11566f-a8d0-4c57-99a4-2dfb11aa0541 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.928420796Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ddd70dbf-40a5-4402-bfc1-e153e6310a1d name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.928516375Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ddd70dbf-40a5-4402-bfc1-e153e6310a1d name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.929836804Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3b77196d-924e-4d71-9715-e6df68391488 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.930340838Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012323930313672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b77196d-924e-4d71-9715-e6df68391488 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.931037935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=45e82d0e-8d34-4e21-9cd3-2e21f25e8fdc name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.931089645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=45e82d0e-8d34-4e21-9cd3-2e21f25e8fdc name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.931334224Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dd8849f48bb11b35bb1f5bd585512cb0b89dc44b163449a6b50f15f54de02a5,PodSandboxId:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728012087721104622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586,PodSandboxId:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946866667557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe1e8ec5dfe43f5131ba996561aaabff07b86db463b8b1773b21b5e5c2d1261,PodSandboxId:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728011946910404717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd,PodSandboxId:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946858124672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5b
d8-44d6-a7dd-6eb87ef3b9ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99,PodSandboxId:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17280119
34650609445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160,PodSandboxId:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728011934180469533,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f,PodSandboxId:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728011924453506492,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec,PodSandboxId:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728011921323682309,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec,PodSandboxId:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728011921323115948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe,PodSandboxId:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728011921253183954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8,PodSandboxId:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728011921250593100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=45e82d0e-8d34-4e21-9cd3-2e21f25e8fdc name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.977152775Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68080e1b-5e59-4468-9f1f-66b6d214e95c name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.977269984Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68080e1b-5e59-4468-9f1f-66b6d214e95c name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.978528093Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=abe6883d-7d0b-4765-b0e1-014e86400e54 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.979019660Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012323978993207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=abe6883d-7d0b-4765-b0e1-014e86400e54 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.979678042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f90621b-7d5a-4226-b5b7-1e33fc21a505 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.979745259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f90621b-7d5a-4226-b5b7-1e33fc21a505 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:23 ha-994751 crio[664]: time="2024-10-04 03:25:23.980056704Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dd8849f48bb11b35bb1f5bd585512cb0b89dc44b163449a6b50f15f54de02a5,PodSandboxId:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728012087721104622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586,PodSandboxId:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946866667557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe1e8ec5dfe43f5131ba996561aaabff07b86db463b8b1773b21b5e5c2d1261,PodSandboxId:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728011946910404717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd,PodSandboxId:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946858124672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5b
d8-44d6-a7dd-6eb87ef3b9ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99,PodSandboxId:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17280119
34650609445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160,PodSandboxId:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728011934180469533,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f,PodSandboxId:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728011924453506492,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec,PodSandboxId:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728011921323682309,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec,PodSandboxId:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728011921323115948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe,PodSandboxId:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728011921253183954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8,PodSandboxId:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728011921250593100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f90621b-7d5a-4226-b5b7-1e33fc21a505 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:24 ha-994751 crio[664]: time="2024-10-04 03:25:24.019489841Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2037137c-8db3-4591-818a-c9134b9b1c44 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:24 ha-994751 crio[664]: time="2024-10-04 03:25:24.019564259Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2037137c-8db3-4591-818a-c9134b9b1c44 name=/runtime.v1.RuntimeService/Version
	Oct 04 03:25:24 ha-994751 crio[664]: time="2024-10-04 03:25:24.020987250Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6a1882a-6aa0-49b9-860f-f7608917e560 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:24 ha-994751 crio[664]: time="2024-10-04 03:25:24.021540931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012324021513729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6a1882a-6aa0-49b9-860f-f7608917e560 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 03:25:24 ha-994751 crio[664]: time="2024-10-04 03:25:24.022276847Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa156a3a-41e5-4cfd-84f6-21ed8e10d940 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:24 ha-994751 crio[664]: time="2024-10-04 03:25:24.022351347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa156a3a-41e5-4cfd-84f6-21ed8e10d940 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 03:25:24 ha-994751 crio[664]: time="2024-10-04 03:25:24.022924780Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dd8849f48bb11b35bb1f5bd585512cb0b89dc44b163449a6b50f15f54de02a5,PodSandboxId:21e8386b77b623ce7a194e32d220a20826004b82457da3f7424ad5a839c760c0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728012087721104622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vh5j6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1e13c9e5-3c5b-47b9-8f41-391304b4184c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586,PodSandboxId:be9b34d6ca0bfe5339f5ceb0f349f24cbd1a48e211f43c8c1e28b9b13895ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946866667557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zgdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd6ed49-8491-4eb0-9863-b498c76ec3c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe1e8ec5dfe43f5131ba996561aaabff07b86db463b8b1773b21b5e5c2d1261,PodSandboxId:dab235bc541ca107185a99b0769b6acbee2c5bec48be93c12dd27dd933ee2035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728011946910404717,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: cc60903f-91b9-4e59-92ab-9f16c09d38d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd,PodSandboxId:d9a5ca3b325fa3036d1d784653e1987f4f8d4138ee347b0d77697996f0147e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728011946858124672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6zst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554a3e5d-5b
d8-44d6-a7dd-6eb87ef3b9ab,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99,PodSandboxId:454652c11f4fe385025f37c927ad3c932967e955f13ce98c9d8026eaf6d85acc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17280119
34650609445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2mhh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442d5ad9-dc9c-4a07-90b3-549591f9d2f1,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160,PodSandboxId:44f2b282edd576de3d1a7801835542f67f00b1aab4d9fe71f5393e0a81cccace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728011934180469533,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f44b9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e1a917-0150-4608-b5f3-b590d330d2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f,PodSandboxId:5461b35eef9c327fc2a76da2a67aa04ae71e3c3fb7e49230394ecad7898bb925,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728011924453506492,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 940a4ffe37e8a399065ce324e2a3e96a,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec,PodSandboxId:0372e9d489f059215842c431e83a2b46ba88cfd7944a0cbe38dd18d947a00350,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728011921323682309,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d09d862da2ecf4fa4a0cc55773908218,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec,PodSandboxId:c61920ab308f6b22af721e54b8ea837b4699df3b3d7d0ae41a6495912eddca62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728011921323115948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-994751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f64e9e1b892e5a5392a0aa1691bb56,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe,PodSandboxId:6d7ea048eea90bead74720b8675083bf27bc30e31606cde013ebada7dec5990c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728011921253183954,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-994751,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ca68d6f5cb32227962ccd27f257d0736,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8,PodSandboxId:8c1c0f1b1a4301dfb05fc4b9505aafe57623d572f5cc4a835732f711b5b6955b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728011921250593100,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-994751,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c779652e8162a5324e798545569be164,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa156a3a-41e5-4cfd-84f6-21ed8e10d940 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9dd8849f48bb1       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   21e8386b77b62       busybox-7dff88458-vh5j6
	2fe1e8ec5dfe4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   dab235bc541ca       storage-provisioner
	eb082a979b36c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   be9b34d6ca0bf       coredns-7c65d6cfc9-zgdck
	93aa8fd39f9c0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   d9a5ca3b325fa       coredns-7c65d6cfc9-l6zst
	6a3f40105608f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   454652c11f4fe       kindnet-2mhh2
	731622c5caa6f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   44f2b282edd57       kube-proxy-f44b9
	8830f0c28d759       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   5461b35eef9c3       kube-vip-ha-994751
	e49d081b73667       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   0372e9d489f05       kube-scheduler-ha-994751
	f5568cb7839e2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   c61920ab308f6       etcd-ha-994751
	849282c506754       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   6d7ea048eea90       kube-apiserver-ha-994751
	f041d718c872f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   8c1c0f1b1a430       kube-controller-manager-ha-994751
	
	
	==> coredns [93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd] <==
	[INFO] 10.244.2.2:42178 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.010745169s
	[INFO] 10.244.2.2:34829 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.009009564s
	[INFO] 10.244.0.4:43910 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001485572s
	[INFO] 10.244.1.2:45378 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000181404s
	[INFO] 10.244.1.2:40886 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001942971s
	[INFO] 10.244.2.2:45461 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000217787s
	[INFO] 10.244.2.2:56545 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000167289s
	[INFO] 10.244.2.2:52063 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000246892s
	[INFO] 10.244.0.4:48765 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150103s
	[INFO] 10.244.1.2:53871 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168625s
	[INFO] 10.244.1.2:58325 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001736755s
	[INFO] 10.244.1.2:38700 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085818s
	[INFO] 10.244.2.2:53525 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016163s
	[INFO] 10.244.2.2:55339 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126355s
	[INFO] 10.244.0.4:33506 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176834s
	[INFO] 10.244.0.4:47714 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136674s
	[INFO] 10.244.0.4:49593 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139876s
	[INFO] 10.244.1.2:51243 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137889s
	[INFO] 10.244.2.2:56043 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000221873s
	[INFO] 10.244.2.2:35783 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000138959s
	[INFO] 10.244.0.4:37503 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013937s
	[INFO] 10.244.0.4:46310 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132408s
	[INFO] 10.244.0.4:35014 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074557s
	[INFO] 10.244.1.2:51803 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153481s
	[INFO] 10.244.1.2:47758 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000198394s
	
	
	==> coredns [eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586] <==
	[INFO] 10.244.2.2:43924 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.01283325s
	[INFO] 10.244.2.2:35798 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148903s
	[INFO] 10.244.0.4:59562 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140549s
	[INFO] 10.244.0.4:41362 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002209213s
	[INFO] 10.244.0.4:41786 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133758s
	[INFO] 10.244.0.4:49269 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001539557s
	[INFO] 10.244.0.4:56941 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018736s
	[INFO] 10.244.0.4:47984 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173422s
	[INFO] 10.244.0.4:41970 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061431s
	[INFO] 10.244.1.2:32918 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119893s
	[INFO] 10.244.1.2:39792 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093113s
	[INFO] 10.244.1.2:41331 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001259323s
	[INFO] 10.244.1.2:45464 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106483s
	[INFO] 10.244.1.2:35852 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153198s
	[INFO] 10.244.2.2:38240 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114031s
	[INFO] 10.244.2.2:54004 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008059s
	[INFO] 10.244.0.4:39542 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092418s
	[INFO] 10.244.1.2:41262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166812s
	[INFO] 10.244.1.2:55889 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146278s
	[INFO] 10.244.1.2:35654 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131643s
	[INFO] 10.244.2.2:37029 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012813s
	[INFO] 10.244.2.2:33774 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000223324s
	[INFO] 10.244.0.4:38911 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138291s
	[INFO] 10.244.1.2:56619 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093621s
	[INFO] 10.244.1.2:33622 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154511s
	
	
	==> describe nodes <==
	Name:               ha-994751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-994751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-994751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T03_18_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:18:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-994751
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:21:51 +0000   Fri, 04 Oct 2024 03:18:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:21:51 +0000   Fri, 04 Oct 2024 03:18:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:21:51 +0000   Fri, 04 Oct 2024 03:18:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:21:51 +0000   Fri, 04 Oct 2024 03:19:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    ha-994751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7452b105a68246eeb61757acefd7f693
	  System UUID:                7452b105-a682-46ee-b617-57acefd7f693
	  Boot ID:                    aecf415c-e5c2-46a9-81d5-d95311218d51
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vh5j6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 coredns-7c65d6cfc9-l6zst             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m32s
	  kube-system                 coredns-7c65d6cfc9-zgdck             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m32s
	  kube-system                 etcd-ha-994751                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m37s
	  kube-system                 kindnet-2mhh2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m32s
	  kube-system                 kube-apiserver-ha-994751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-controller-manager-ha-994751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-proxy-f44b9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-scheduler-ha-994751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-vip-ha-994751                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m29s  kube-proxy       
	  Normal  Starting                 6m37s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m37s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m37s  kubelet          Node ha-994751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s  kubelet          Node ha-994751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s  kubelet          Node ha-994751 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m33s  node-controller  Node ha-994751 event: Registered Node ha-994751 in Controller
	  Normal  NodeReady                6m18s  kubelet          Node ha-994751 status is now: NodeReady
	  Normal  RegisteredNode           5m37s  node-controller  Node ha-994751 event: Registered Node ha-994751 in Controller
	  Normal  RegisteredNode           4m21s  node-controller  Node ha-994751 event: Registered Node ha-994751 in Controller
	
	
	Name:               ha-994751-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-994751-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-994751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_04T03_19_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:19:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-994751-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:22:42 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 04 Oct 2024 03:21:42 +0000   Fri, 04 Oct 2024 03:23:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 04 Oct 2024 03:21:42 +0000   Fri, 04 Oct 2024 03:23:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 04 Oct 2024 03:21:42 +0000   Fri, 04 Oct 2024 03:23:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 04 Oct 2024 03:21:42 +0000   Fri, 04 Oct 2024 03:23:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    ha-994751-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f6683e6a9e1244f787f84f2a5c1bf490
	  System UUID:                f6683e6a-9e12-44f7-87f8-4f2a5c1bf490
	  Boot ID:                    8b02ddc0-820d-4de5-b649-7e2202f66ea5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wc5kg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 etcd-ha-994751-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m43s
	  kube-system                 kindnet-rmcvt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m45s
	  kube-system                 kube-apiserver-ha-994751-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-controller-manager-ha-994751-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-proxy-ph6cf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-scheduler-ha-994751-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-vip-ha-994751-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m40s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m45s (x8 over 5m45s)  kubelet          Node ha-994751-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m45s (x8 over 5m45s)  kubelet          Node ha-994751-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m45s (x7 over 5m45s)  kubelet          Node ha-994751-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m43s                  node-controller  Node ha-994751-m02 event: Registered Node ha-994751-m02 in Controller
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-994751-m02 event: Registered Node ha-994751-m02 in Controller
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-994751-m02 event: Registered Node ha-994751-m02 in Controller
	  Normal  NodeNotReady             2m                     node-controller  Node ha-994751-m02 status is now: NodeNotReady
	
	
	Name:               ha-994751-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-994751-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-994751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_04T03_20_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:20:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-994751-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:20:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:20:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:20:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:21:56 +0000   Fri, 04 Oct 2024 03:21:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-994751-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 df18b27d8a2e4c8893a601b97ec7e8e0
	  System UUID:                df18b27d-8a2e-4c88-93a6-01b97ec7e8e0
	  Boot ID:                    138aa962-c7a2-47ea-82c1-2a5ccfbc3de0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nrdqk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 etcd-ha-994751-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m28s
	  kube-system                 kindnet-clt5p                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m30s
	  kube-system                 kube-apiserver-ha-994751-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-controller-manager-ha-994751-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-proxy-9q6q2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-scheduler-ha-994751-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-vip-ha-994751-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m24s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m30s (x8 over 4m30s)  kubelet          Node ha-994751-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m30s (x8 over 4m30s)  kubelet          Node ha-994751-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m30s (x7 over 4m30s)  kubelet          Node ha-994751-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-994751-m03 event: Registered Node ha-994751-m03 in Controller
	  Normal  RegisteredNode           4m26s                  node-controller  Node ha-994751-m03 event: Registered Node ha-994751-m03 in Controller
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-994751-m03 event: Registered Node ha-994751-m03 in Controller
	
	
	Name:               ha-994751-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-994751-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=ha-994751
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_04T03_22_03_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:22:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-994751-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:25:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:22:33 +0000   Fri, 04 Oct 2024 03:22:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:22:33 +0000   Fri, 04 Oct 2024 03:22:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:22:33 +0000   Fri, 04 Oct 2024 03:22:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:22:33 +0000   Fri, 04 Oct 2024 03:22:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    ha-994751-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d61802e745d4414c8e0a1c3e5c1319f7
	  System UUID:                d61802e7-45d4-414c-8e0a-1c3e5c1319f7
	  Boot ID:                    f154d01f-d315-40b5-84e6-0d0b669735cf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-sggz9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m21s
	  kube-system                 kube-proxy-xsz4w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m15s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  3m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m21s                  node-controller  Node ha-994751-m04 event: Registered Node ha-994751-m04 in Controller
	  Normal  RegisteredNode           3m21s                  node-controller  Node ha-994751-m04 event: Registered Node ha-994751-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m21s (x2 over 3m22s)  kubelet          Node ha-994751-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m21s (x2 over 3m22s)  kubelet          Node ha-994751-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m21s (x2 over 3m22s)  kubelet          Node ha-994751-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-994751-m04 event: Registered Node ha-994751-m04 in Controller
	  Normal  NodeReady                3m                     kubelet          Node ha-994751-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 4 03:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050646] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040240] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.800548] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.470270] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.581508] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.982603] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.059297] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061306] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.198058] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.129574] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.276832] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.888308] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +3.806908] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.054958] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.117103] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.085956] kauditd_printk_skb: 79 callbacks suppressed
	[  +7.063470] kauditd_printk_skb: 21 callbacks suppressed
	[Oct 4 03:19] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.285701] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec] <==
	{"level":"warn","ts":"2024-10-04T03:25:24.174241Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.182271Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.117:2380/version","remote-member-id":"895cda09ee52f930","error":"Get \"https://192.168.39.117:2380/version\": dial tcp 192.168.39.117:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-10-04T03:25:24.182325Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"895cda09ee52f930","error":"Get \"https://192.168.39.117:2380/version\": dial tcp 192.168.39.117:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-10-04T03:25:24.329324Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.341415Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.348181Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.354559Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.358128Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.362106Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.368522Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.373708Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.375562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.383141Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.384045Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.387813Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.391522Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.398233Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.405450Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.412645Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.416789Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.421896Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.427697Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.435383Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.447152Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-04T03:25:24.474292Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c17fb7325889e027","from":"c17fb7325889e027","remote-peer-id":"895cda09ee52f930","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 03:25:24 up 7 min,  0 users,  load average: 0.11, 0.15, 0.09
	Linux ha-994751 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99] <==
	I1004 03:24:46.000568       1 main.go:322] Node ha-994751-m04 has CIDR [10.244.3.0/24] 
	I1004 03:24:55.996427       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1004 03:24:55.996581       1 main.go:299] handling current node
	I1004 03:24:55.996609       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I1004 03:24:55.996628       1 main.go:322] Node ha-994751-m02 has CIDR [10.244.1.0/24] 
	I1004 03:24:55.996891       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1004 03:24:55.997045       1 main.go:322] Node ha-994751-m03 has CIDR [10.244.2.0/24] 
	I1004 03:24:55.997190       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I1004 03:24:55.997280       1 main.go:322] Node ha-994751-m04 has CIDR [10.244.3.0/24] 
	I1004 03:25:05.999244       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I1004 03:25:05.999341       1 main.go:322] Node ha-994751-m02 has CIDR [10.244.1.0/24] 
	I1004 03:25:05.999525       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1004 03:25:05.999565       1 main.go:322] Node ha-994751-m03 has CIDR [10.244.2.0/24] 
	I1004 03:25:05.999630       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I1004 03:25:05.999660       1 main.go:322] Node ha-994751-m04 has CIDR [10.244.3.0/24] 
	I1004 03:25:05.999742       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1004 03:25:05.999771       1 main.go:299] handling current node
	I1004 03:25:16.002618       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1004 03:25:16.002727       1 main.go:299] handling current node
	I1004 03:25:16.002764       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I1004 03:25:16.002782       1 main.go:322] Node ha-994751-m02 has CIDR [10.244.1.0/24] 
	I1004 03:25:16.003010       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1004 03:25:16.003037       1 main.go:322] Node ha-994751-m03 has CIDR [10.244.2.0/24] 
	I1004 03:25:16.003121       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I1004 03:25:16.003140       1 main.go:322] Node ha-994751-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe] <==
	I1004 03:18:46.533293       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 03:18:46.536324       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1004 03:18:46.567509       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.65]
	I1004 03:18:46.569728       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:18:46.579199       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:18:47.324394       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 03:18:47.342483       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1004 03:18:47.354293       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 03:18:52.030260       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1004 03:18:52.131882       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1004 03:21:29.605335       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53690: use of closed network connection
	E1004 03:21:29.795618       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53702: use of closed network connection
	E1004 03:21:29.974284       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53722: use of closed network connection
	E1004 03:21:30.184885       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53734: use of closed network connection
	E1004 03:21:30.399362       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53748: use of closed network connection
	E1004 03:21:30.586499       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53770: use of closed network connection
	E1004 03:21:30.773657       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53776: use of closed network connection
	E1004 03:21:30.946921       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53796: use of closed network connection
	E1004 03:21:31.140751       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53812: use of closed network connection
	E1004 03:21:31.439406       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53848: use of closed network connection
	E1004 03:21:31.610289       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53874: use of closed network connection
	E1004 03:21:31.791527       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53896: use of closed network connection
	E1004 03:21:31.973829       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53924: use of closed network connection
	E1004 03:21:32.157183       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53938: use of closed network connection
	E1004 03:21:32.326553       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53952: use of closed network connection
	
	
	==> kube-controller-manager [f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8] <==
	I1004 03:22:03.059069       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-994751-m04" podCIDRs=["10.244.3.0/24"]
	I1004 03:22:03.059118       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.061876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.076574       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.137039       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.276697       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.662795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:03.977537       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:04.044472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:06.344839       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:06.345923       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-994751-m04"
	I1004 03:22:06.383881       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:13.412719       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:24.487665       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-994751-m04"
	I1004 03:22:24.487754       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:24.502742       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:26.362397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:22:33.863379       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m04"
	I1004 03:23:24.007837       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-994751-m04"
	I1004 03:23:24.008551       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m02"
	I1004 03:23:24.038687       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m02"
	I1004 03:23:24.187288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.759379ms"
	I1004 03:23:24.187415       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.69µs"
	I1004 03:23:26.454826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m02"
	I1004 03:23:29.201808       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-994751-m02"
	
	
	==> kube-proxy [731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:18:54.520708       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:18:54.543515       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.65"]
	E1004 03:18:54.543642       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:18:54.585531       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:18:54.585592       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:18:54.585623       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:18:54.595069       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:18:54.598246       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:18:54.598343       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:18:54.602801       1 config.go:199] "Starting service config controller"
	I1004 03:18:54.603172       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:18:54.603521       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:18:54.603587       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:18:54.607605       1 config.go:328] "Starting node config controller"
	I1004 03:18:54.607621       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:18:54.704654       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:18:54.704732       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 03:18:54.707708       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec] <==
	W1004 03:18:45.760588       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:18:45.760709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:18:45.902575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:18:45.902704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 03:18:45.937221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 03:18:45.937512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 03:18:46.030883       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 03:18:46.031049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1004 03:18:48.095287       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1004 03:22:03.109132       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zh45q\": pod kindnet-zh45q is already assigned to node \"ha-994751-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zh45q" node="ha-994751-m04"
	E1004 03:22:03.113875       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cc0c3789-7dca-4ede-a355-9ac6d9db68c2(kube-system/kindnet-zh45q) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-zh45q"
	E1004 03:22:03.114052       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zh45q\": pod kindnet-zh45q is already assigned to node \"ha-994751-m04\"" pod="kube-system/kindnet-zh45q"
	I1004 03:22:03.114143       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zh45q" node="ha-994751-m04"
	E1004 03:22:03.121368       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xsz4w\": pod kube-proxy-xsz4w is already assigned to node \"ha-994751-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xsz4w" node="ha-994751-m04"
	E1004 03:22:03.121569       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4f6e672a-e80b-4f45-b3a5-98dfa1ebaad3(kube-system/kube-proxy-xsz4w) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-xsz4w"
	E1004 03:22:03.121624       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xsz4w\": pod kube-proxy-xsz4w is already assigned to node \"ha-994751-m04\"" pod="kube-system/kube-proxy-xsz4w"
	I1004 03:22:03.121686       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xsz4w" node="ha-994751-m04"
	E1004 03:22:03.177157       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-zbb9z\": pod kube-proxy-zbb9z is already assigned to node \"ha-994751-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-zbb9z" node="ha-994751-m04"
	E1004 03:22:03.177330       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a7948b15-0522-4cbd-8803-8c139b2e791a(kube-system/kube-proxy-zbb9z) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-zbb9z"
	E1004 03:22:03.177379       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zbb9z\": pod kube-proxy-zbb9z is already assigned to node \"ha-994751-m04\"" pod="kube-system/kube-proxy-zbb9z"
	I1004 03:22:03.177445       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-zbb9z" node="ha-994751-m04"
	E1004 03:22:03.177921       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qfb5r\": pod kindnet-qfb5r is already assigned to node \"ha-994751-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qfb5r" node="ha-994751-m04"
	E1004 03:22:03.181030       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 085d0454-1ccc-408e-ae12-366c29ab0a15(kube-system/kindnet-qfb5r) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qfb5r"
	E1004 03:22:03.181113       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qfb5r\": pod kindnet-qfb5r is already assigned to node \"ha-994751-m04\"" pod="kube-system/kindnet-qfb5r"
	I1004 03:22:03.181162       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qfb5r" node="ha-994751-m04"
	
	
	==> kubelet <==
	Oct 04 03:23:47 ha-994751 kubelet[1305]: E1004 03:23:47.373529    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012227373073617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:23:47 ha-994751 kubelet[1305]: E1004 03:23:47.373558    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012227373073617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:23:57 ha-994751 kubelet[1305]: E1004 03:23:57.376221    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012237375404117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:23:57 ha-994751 kubelet[1305]: E1004 03:23:57.376607    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012237375404117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:07 ha-994751 kubelet[1305]: E1004 03:24:07.379453    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012247378682972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:07 ha-994751 kubelet[1305]: E1004 03:24:07.379509    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012247378682972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:17 ha-994751 kubelet[1305]: E1004 03:24:17.381784    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012257381348480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:17 ha-994751 kubelet[1305]: E1004 03:24:17.382305    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012257381348480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:27 ha-994751 kubelet[1305]: E1004 03:24:27.387309    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012267384211934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:27 ha-994751 kubelet[1305]: E1004 03:24:27.387674    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012267384211934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:37 ha-994751 kubelet[1305]: E1004 03:24:37.389662    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012277389023499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:37 ha-994751 kubelet[1305]: E1004 03:24:37.390147    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012277389023499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:47 ha-994751 kubelet[1305]: E1004 03:24:47.337368    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 03:24:47 ha-994751 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 03:24:47 ha-994751 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 03:24:47 ha-994751 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 03:24:47 ha-994751 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 03:24:47 ha-994751 kubelet[1305]: E1004 03:24:47.393080    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012287392471580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:47 ha-994751 kubelet[1305]: E1004 03:24:47.393113    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012287392471580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:57 ha-994751 kubelet[1305]: E1004 03:24:57.395248    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012297394773017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:24:57 ha-994751 kubelet[1305]: E1004 03:24:57.395590    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012297394773017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:25:07 ha-994751 kubelet[1305]: E1004 03:25:07.398270    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012307397806386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:25:07 ha-994751 kubelet[1305]: E1004 03:25:07.398317    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012307397806386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:25:17 ha-994751 kubelet[1305]: E1004 03:25:17.401131    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012317400306587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 03:25:17 ha-994751 kubelet[1305]: E1004 03:25:17.401184    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728012317400306587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-994751 -n ha-994751
helpers_test.go:261: (dbg) Run:  kubectl --context ha-994751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.29s)
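The post-mortem above shows the surviving control plane (ha-994751, 192.168.39.65) repeatedly dropping Raft heartbeats to etcd member 895cda09ee52f930 at 192.168.39.117, which the kindnet log identifies as ha-994751-m02, the secondary node stopped earlier in this serial run. As an illustration only (not a step the test executes), the member state could be confirmed from the primary with etcdctl; the pod name etcd-ha-994751 and the /var/lib/minikube/certs/etcd certificate paths below are assumptions about the minikube layout and may need adjusting:

	# Illustrative only -- not run by the test. Lists etcd members as seen
	# from the surviving control-plane node's etcd pod.
	kubectl --context ha-994751 -n kube-system exec etcd-ha-994751 -- \
	  etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    member list -w table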

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (363.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-994751 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-994751 -v=7 --alsologtostderr
E1004 03:27:08.994612   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:27:15.014651   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-994751 -v=7 --alsologtostderr: exit status 82 (2m1.936681034s)

                                                
                                                
-- stdout --
	* Stopping node "ha-994751-m04"  ...
	* Stopping node "ha-994751-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 03:25:25.557728   35894 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:25:25.557949   35894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:25:25.557957   35894 out.go:358] Setting ErrFile to fd 2...
	I1004 03:25:25.557961   35894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:25:25.558133   35894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 03:25:25.558358   35894 out.go:352] Setting JSON to false
	I1004 03:25:25.558465   35894 mustload.go:65] Loading cluster: ha-994751
	I1004 03:25:25.558848   35894 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:25:25.558938   35894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:25:25.559125   35894 mustload.go:65] Loading cluster: ha-994751
	I1004 03:25:25.559254   35894 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:25:25.559286   35894 stop.go:39] StopHost: ha-994751-m04
	I1004 03:25:25.559669   35894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:25:25.559714   35894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:25:25.575675   35894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39757
	I1004 03:25:25.576226   35894 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:25:25.576799   35894 main.go:141] libmachine: Using API Version  1
	I1004 03:25:25.576823   35894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:25:25.577121   35894 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:25:25.581049   35894 out.go:177] * Stopping node "ha-994751-m04"  ...
	I1004 03:25:25.582954   35894 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1004 03:25:25.583001   35894 main.go:141] libmachine: (ha-994751-m04) Calling .DriverName
	I1004 03:25:25.583355   35894 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1004 03:25:25.583391   35894 main.go:141] libmachine: (ha-994751-m04) Calling .GetSSHHostname
	I1004 03:25:25.586776   35894 main.go:141] libmachine: (ha-994751-m04) DBG | domain ha-994751-m04 has defined MAC address 52:54:00:5e:d5:b5 in network mk-ha-994751
	I1004 03:25:25.587278   35894 main.go:141] libmachine: (ha-994751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:b5", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:21:47 +0000 UTC Type:0 Mac:52:54:00:5e:d5:b5 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-994751-m04 Clientid:01:52:54:00:5e:d5:b5}
	I1004 03:25:25.587305   35894 main.go:141] libmachine: (ha-994751-m04) DBG | domain ha-994751-m04 has defined IP address 192.168.39.134 and MAC address 52:54:00:5e:d5:b5 in network mk-ha-994751
	I1004 03:25:25.587475   35894 main.go:141] libmachine: (ha-994751-m04) Calling .GetSSHPort
	I1004 03:25:25.587695   35894 main.go:141] libmachine: (ha-994751-m04) Calling .GetSSHKeyPath
	I1004 03:25:25.587884   35894 main.go:141] libmachine: (ha-994751-m04) Calling .GetSSHUsername
	I1004 03:25:25.588058   35894 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m04/id_rsa Username:docker}
	I1004 03:25:25.676555   35894 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1004 03:25:25.732193   35894 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1004 03:25:25.786876   35894 main.go:141] libmachine: Stopping "ha-994751-m04"...
	I1004 03:25:25.786910   35894 main.go:141] libmachine: (ha-994751-m04) Calling .GetState
	I1004 03:25:25.788579   35894 main.go:141] libmachine: (ha-994751-m04) Calling .Stop
	I1004 03:25:25.792851   35894 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 0/120
	I1004 03:25:27.020480   35894 main.go:141] libmachine: (ha-994751-m04) Calling .GetState
	I1004 03:25:27.021856   35894 main.go:141] libmachine: Machine "ha-994751-m04" was stopped.
	I1004 03:25:27.021888   35894 stop.go:75] duration metric: took 1.438926203s to stop
	I1004 03:25:27.021919   35894 stop.go:39] StopHost: ha-994751-m03
	I1004 03:25:27.022220   35894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:25:27.022286   35894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:25:27.036915   35894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45079
	I1004 03:25:27.037300   35894 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:25:27.037726   35894 main.go:141] libmachine: Using API Version  1
	I1004 03:25:27.037748   35894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:25:27.038062   35894 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:25:27.040349   35894 out.go:177] * Stopping node "ha-994751-m03"  ...
	I1004 03:25:27.041664   35894 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1004 03:25:27.041684   35894 main.go:141] libmachine: (ha-994751-m03) Calling .DriverName
	I1004 03:25:27.041890   35894 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1004 03:25:27.041911   35894 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHHostname
	I1004 03:25:27.044809   35894 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:25:27.045266   35894 main.go:141] libmachine: (ha-994751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:76:ea", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:20:19 +0000 UTC Type:0 Mac:52:54:00:49:76:ea Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-994751-m03 Clientid:01:52:54:00:49:76:ea}
	I1004 03:25:27.045302   35894 main.go:141] libmachine: (ha-994751-m03) DBG | domain ha-994751-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:49:76:ea in network mk-ha-994751
	I1004 03:25:27.045434   35894 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHPort
	I1004 03:25:27.045607   35894 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHKeyPath
	I1004 03:25:27.045742   35894 main.go:141] libmachine: (ha-994751-m03) Calling .GetSSHUsername
	I1004 03:25:27.045871   35894 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m03/id_rsa Username:docker}
	I1004 03:25:27.137652   35894 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1004 03:25:27.192542   35894 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1004 03:25:27.248424   35894 main.go:141] libmachine: Stopping "ha-994751-m03"...
	I1004 03:25:27.248455   35894 main.go:141] libmachine: (ha-994751-m03) Calling .GetState
	I1004 03:25:27.250107   35894 main.go:141] libmachine: (ha-994751-m03) Calling .Stop
	I1004 03:25:27.253429   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 0/120
	I1004 03:25:28.254891   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 1/120
	I1004 03:25:29.256314   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 2/120
	I1004 03:25:30.258181   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 3/120
	I1004 03:25:31.259500   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 4/120
	I1004 03:25:32.261285   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 5/120
	I1004 03:25:33.262768   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 6/120
	I1004 03:25:34.264088   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 7/120
	I1004 03:25:35.266168   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 8/120
	I1004 03:25:36.267587   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 9/120
	I1004 03:25:37.270114   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 10/120
	I1004 03:25:38.271765   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 11/120
	I1004 03:25:39.273502   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 12/120
	I1004 03:25:40.275469   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 13/120
	I1004 03:25:41.277158   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 14/120
	I1004 03:25:42.279723   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 15/120
	I1004 03:25:43.281480   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 16/120
	I1004 03:25:44.283014   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 17/120
	I1004 03:25:45.284378   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 18/120
	I1004 03:25:46.285957   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 19/120
	I1004 03:25:47.288105   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 20/120
	I1004 03:25:48.289943   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 21/120
	I1004 03:25:49.292657   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 22/120
	I1004 03:25:50.294135   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 23/120
	I1004 03:25:51.295718   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 24/120
	I1004 03:25:52.297660   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 25/120
	I1004 03:25:53.298996   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 26/120
	I1004 03:25:54.300598   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 27/120
	I1004 03:25:55.301828   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 28/120
	I1004 03:25:56.303410   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 29/120
	I1004 03:25:57.305349   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 30/120
	I1004 03:25:58.306932   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 31/120
	I1004 03:25:59.308139   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 32/120
	I1004 03:26:00.309577   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 33/120
	I1004 03:26:01.310956   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 34/120
	I1004 03:26:02.312776   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 35/120
	I1004 03:26:03.314138   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 36/120
	I1004 03:26:04.315330   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 37/120
	I1004 03:26:05.316812   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 38/120
	I1004 03:26:06.317929   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 39/120
	I1004 03:26:07.319395   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 40/120
	I1004 03:26:08.321004   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 41/120
	I1004 03:26:09.322473   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 42/120
	I1004 03:26:10.323807   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 43/120
	I1004 03:26:11.325268   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 44/120
	I1004 03:26:12.327071   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 45/120
	I1004 03:26:13.329292   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 46/120
	I1004 03:26:14.330647   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 47/120
	I1004 03:26:15.331983   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 48/120
	I1004 03:26:16.333430   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 49/120
	I1004 03:26:17.335258   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 50/120
	I1004 03:26:18.336689   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 51/120
	I1004 03:26:19.338351   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 52/120
	I1004 03:26:20.339835   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 53/120
	I1004 03:26:21.341601   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 54/120
	I1004 03:26:22.343481   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 55/120
	I1004 03:26:23.344986   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 56/120
	I1004 03:26:24.346334   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 57/120
	I1004 03:26:25.348639   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 58/120
	I1004 03:26:26.350175   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 59/120
	I1004 03:26:27.351685   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 60/120
	I1004 03:26:28.353160   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 61/120
	I1004 03:26:29.354616   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 62/120
	I1004 03:26:30.356217   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 63/120
	I1004 03:26:31.357597   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 64/120
	I1004 03:26:32.359047   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 65/120
	I1004 03:26:33.360605   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 66/120
	I1004 03:26:34.361893   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 67/120
	I1004 03:26:35.363270   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 68/120
	I1004 03:26:36.364788   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 69/120
	I1004 03:26:37.366828   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 70/120
	I1004 03:26:38.368903   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 71/120
	I1004 03:26:39.370358   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 72/120
	I1004 03:26:40.371699   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 73/120
	I1004 03:26:41.373342   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 74/120
	I1004 03:26:42.374879   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 75/120
	I1004 03:26:43.376537   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 76/120
	I1004 03:26:44.377865   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 77/120
	I1004 03:26:45.379258   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 78/120
	I1004 03:26:46.380463   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 79/120
	I1004 03:26:47.382225   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 80/120
	I1004 03:26:48.383625   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 81/120
	I1004 03:26:49.385169   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 82/120
	I1004 03:26:50.386452   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 83/120
	I1004 03:26:51.388270   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 84/120
	I1004 03:26:52.390219   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 85/120
	I1004 03:26:53.391649   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 86/120
	I1004 03:26:54.393102   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 87/120
	I1004 03:26:55.394388   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 88/120
	I1004 03:26:56.395734   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 89/120
	I1004 03:26:57.397154   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 90/120
	I1004 03:26:58.398439   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 91/120
	I1004 03:26:59.399846   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 92/120
	I1004 03:27:00.401353   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 93/120
	I1004 03:27:01.402678   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 94/120
	I1004 03:27:02.404407   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 95/120
	I1004 03:27:03.406177   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 96/120
	I1004 03:27:04.407392   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 97/120
	I1004 03:27:05.408698   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 98/120
	I1004 03:27:06.410005   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 99/120
	I1004 03:27:07.412259   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 100/120
	I1004 03:27:08.413586   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 101/120
	I1004 03:27:09.415071   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 102/120
	I1004 03:27:10.416588   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 103/120
	I1004 03:27:11.418361   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 104/120
	I1004 03:27:12.419856   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 105/120
	I1004 03:27:13.421248   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 106/120
	I1004 03:27:14.422733   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 107/120
	I1004 03:27:15.424249   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 108/120
	I1004 03:27:16.426238   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 109/120
	I1004 03:27:17.428099   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 110/120
	I1004 03:27:18.429453   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 111/120
	I1004 03:27:19.430764   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 112/120
	I1004 03:27:20.432097   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 113/120
	I1004 03:27:21.433442   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 114/120
	I1004 03:27:22.435480   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 115/120
	I1004 03:27:23.436949   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 116/120
	I1004 03:27:24.438876   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 117/120
	I1004 03:27:25.440355   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 118/120
	I1004 03:27:26.442205   35894 main.go:141] libmachine: (ha-994751-m03) Waiting for machine to stop 119/120
	I1004 03:27:27.443197   35894 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1004 03:27:27.443261   35894 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 03:27:27.445470   35894 out.go:201] 
	W1004 03:27:27.446851   35894 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1004 03:27:27.446864   35894 out.go:270] * 
	* 
	W1004 03:27:27.449010   35894 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 03:27:27.450697   35894 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-994751 -v=7 --alsologtostderr" : exit status 82
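Editor's note: the stderr above shows the shape of the failure — roughly 120 one-second "Waiting for machine to stop N/120" polls, then "unable to stop vm, current state \"Running\"", surfaced as GUEST_STOP_TIMEOUT and exit status 82. The following is a minimal, hypothetical Go sketch of that bounded stop-and-poll pattern, written only to illustrate the log; the names `vm`, `stopWithTimeout`, and `stuckVM` are invented here and are not minikube's or libmachine's actual API.

```go
// Illustrative sketch only (not minikube source): request a stop, then poll
// the machine state once per second for a bounded number of attempts, and
// give up with an "unable to stop vm" error if it is still Running.
package main

import (
	"errors"
	"fmt"
	"time"
)

// vm is a minimal stand-in for a libmachine-style driver (hypothetical).
type vm interface {
	Stop() error            // request a guest shutdown
	State() (string, error) // e.g. "Running" or "Stopped"
}

// stopWithTimeout asks the machine to stop and waits up to maxWait seconds
// for it to leave the "Running" state, mirroring the N/120 polling above.
func stopWithTimeout(m vm, maxWait int) error {
	if err := m.Stop(); err != nil {
		return fmt.Errorf("stop request failed: %w", err)
	}
	for i := 0; i < maxWait; i++ {
		state, err := m.State()
		if err != nil {
			return fmt.Errorf("query state: %w", err)
		}
		if state != "Running" {
			return nil // stopped within the allowed window
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxWait)
		time.Sleep(time.Second)
	}
	// The caller would map this error to a non-zero exit, as the CLI did
	// above (GUEST_STOP_TIMEOUT, exit status 82).
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stuckVM never leaves the Running state, reproducing the failure mode seen
// for ha-994751-m03 in the log above.
type stuckVM struct{}

func (stuckVM) Stop() error            { return nil }
func (stuckVM) State() (string, error) { return "Running", nil }

func main() {
	// A short window keeps the demo quick; the run above waited 120 attempts.
	if err := stopWithTimeout(stuckVM{}, 3); err != nil {
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
	}
}
```

Under these assumptions the demo prints three wait lines and then the timeout error, which is the same progression the captured stderr shows before the test falls through to the `start --wait=true` recovery step below.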
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-994751 --wait=true -v=7 --alsologtostderr
E1004 03:27:42.718447   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:28:32.067116   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-994751 --wait=true -v=7 --alsologtostderr: (3m59.04293938s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-994751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-994751 -n ha-994751
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-994751 logs -n 25: (2.289749112s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m02:/home/docker/cp-test_ha-994751-m03_ha-994751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m02 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m03_ha-994751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04:/home/docker/cp-test_ha-994751-m03_ha-994751-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m04 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m03_ha-994751-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp testdata/cp-test.txt                                                | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1363640037/001/cp-test_ha-994751-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751:/home/docker/cp-test_ha-994751-m04_ha-994751.txt                       |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751 sudo cat                                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751.txt                                 |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m02:/home/docker/cp-test_ha-994751-m04_ha-994751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m02 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03:/home/docker/cp-test_ha-994751-m04_ha-994751-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m03 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-994751 node stop m02 -v=7                                                     | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-994751 node start m02 -v=7                                                    | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-994751 -v=7                                                           | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-994751 -v=7                                                                | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-994751 --wait=true -v=7                                                    | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:27 UTC | 04 Oct 24 03:31 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-994751                                                                | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:31 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 03:27:27
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 03:27:27.500451   36399 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:27:27.501055   36399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:27:27.501071   36399 out.go:358] Setting ErrFile to fd 2...
	I1004 03:27:27.501079   36399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:27:27.501357   36399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 03:27:27.501934   36399 out.go:352] Setting JSON to false
	I1004 03:27:27.502911   36399 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4192,"bootTime":1728008255,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 03:27:27.503001   36399 start.go:139] virtualization: kvm guest
	I1004 03:27:27.505363   36399 out.go:177] * [ha-994751] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 03:27:27.506622   36399 notify.go:220] Checking for updates...
	I1004 03:27:27.506672   36399 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:27:27.508073   36399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:27:27.509378   36399 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:27:27.510718   36399 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:27:27.511948   36399 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 03:27:27.513338   36399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:27:27.514960   36399 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:27:27.515046   36399 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:27:27.515471   36399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:27:27.515513   36399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:27:27.531986   36399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I1004 03:27:27.532412   36399 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:27:27.532953   36399 main.go:141] libmachine: Using API Version  1
	I1004 03:27:27.532977   36399 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:27:27.533389   36399 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:27:27.533613   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:27:27.569838   36399 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 03:27:27.571130   36399 start.go:297] selected driver: kvm2
	I1004 03:27:27.571143   36399 start.go:901] validating driver "kvm2" against &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.134 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:27:27.571289   36399 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:27:27.571612   36399 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:27:27.571692   36399 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 03:27:27.586267   36399 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 03:27:27.587046   36399 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:27:27.587095   36399 cni.go:84] Creating CNI manager for ""
	I1004 03:27:27.587148   36399 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1004 03:27:27.587209   36399 start.go:340] cluster config:
	{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.3
9.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.134 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:
false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:27:27.587330   36399 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:27:27.589250   36399 out.go:177] * Starting "ha-994751" primary control-plane node in "ha-994751" cluster
	I1004 03:27:27.590752   36399 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:27:27.590787   36399 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 03:27:27.590793   36399 cache.go:56] Caching tarball of preloaded images
	I1004 03:27:27.590868   36399 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 03:27:27.590880   36399 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:27:27.590994   36399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:27:27.591178   36399 start.go:360] acquireMachinesLock for ha-994751: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 03:27:27.591248   36399 start.go:364] duration metric: took 54.417µs to acquireMachinesLock for "ha-994751"
	I1004 03:27:27.591262   36399 start.go:96] Skipping create...Using existing machine configuration
	I1004 03:27:27.591269   36399 fix.go:54] fixHost starting: 
	I1004 03:27:27.591526   36399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:27:27.591554   36399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:27:27.605468   36399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46007
	I1004 03:27:27.605851   36399 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:27:27.606293   36399 main.go:141] libmachine: Using API Version  1
	I1004 03:27:27.606310   36399 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:27:27.606688   36399 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:27:27.606873   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:27:27.607010   36399 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:27:27.608481   36399 fix.go:112] recreateIfNeeded on ha-994751: state=Running err=<nil>
	W1004 03:27:27.608504   36399 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 03:27:27.611329   36399 out.go:177] * Updating the running kvm2 "ha-994751" VM ...
	I1004 03:27:27.612694   36399 machine.go:93] provisionDockerMachine start ...
	I1004 03:27:27.612713   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:27:27.612889   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:27:27.615203   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.615696   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:27:27.615722   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.615826   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:27:27.615965   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:27.616084   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:27.616196   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:27:27.616318   36399 main.go:141] libmachine: Using SSH client type: native
	I1004 03:27:27.616497   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:27:27.616508   36399 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 03:27:27.721174   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-994751
	
	I1004 03:27:27.721201   36399 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:27:27.721452   36399 buildroot.go:166] provisioning hostname "ha-994751"
	I1004 03:27:27.721482   36399 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:27:27.721695   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:27:27.724556   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.724949   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:27:27.724978   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.725103   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:27:27.725293   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:27.725408   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:27.725564   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:27:27.725710   36399 main.go:141] libmachine: Using SSH client type: native
	I1004 03:27:27.725930   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:27:27.725961   36399 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-994751 && echo "ha-994751" | sudo tee /etc/hostname
	I1004 03:27:27.843801   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-994751
	
	I1004 03:27:27.843833   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:27:27.846411   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.846748   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:27:27.846774   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.846957   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:27:27.847141   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:27.847532   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:27.847677   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:27:27.847850   36399 main.go:141] libmachine: Using SSH client type: native
	I1004 03:27:27.848018   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:27:27.848034   36399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-994751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-994751/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-994751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:27:27.948554   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:27:27.948580   36399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 03:27:27.948594   36399 buildroot.go:174] setting up certificates
	I1004 03:27:27.948606   36399 provision.go:84] configureAuth start
	I1004 03:27:27.948617   36399 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:27:27.948905   36399 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:27:27.951371   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.951747   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:27:27.951771   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.951931   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:27:27.954165   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.954529   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:27:27.954566   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.954702   36399 provision.go:143] copyHostCerts
	I1004 03:27:27.954729   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:27:27.954778   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 03:27:27.954791   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:27:27.954873   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 03:27:27.954982   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:27:27.955008   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 03:27:27.955017   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:27:27.955053   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 03:27:27.955167   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:27:27.955190   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 03:27:27.955200   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:27:27.955240   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 03:27:27.955319   36399 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.ha-994751 san=[127.0.0.1 192.168.39.65 ha-994751 localhost minikube]
	I1004 03:27:28.427197   36399 provision.go:177] copyRemoteCerts
	I1004 03:27:28.427256   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:27:28.427288   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:27:28.430042   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:28.430393   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:27:28.430419   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:28.430629   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:27:28.430819   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:28.430963   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:27:28.431175   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:27:28.512533   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:27:28.512643   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:27:28.542281   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:27:28.542347   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1004 03:27:28.570001   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:27:28.570067   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 03:27:28.597404   36399 provision.go:87] duration metric: took 648.786199ms to configureAuth
	I1004 03:27:28.597437   36399 buildroot.go:189] setting minikube options for container-runtime
	I1004 03:27:28.597690   36399 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:27:28.597772   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:27:28.600512   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:28.600920   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:27:28.600946   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:28.601094   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:27:28.601244   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:28.601390   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:28.601519   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:27:28.601695   36399 main.go:141] libmachine: Using SSH client type: native
	I1004 03:27:28.601871   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:27:28.601887   36399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:28:59.506730   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:28:59.506758   36399 machine.go:96] duration metric: took 1m31.894051445s to provisionDockerMachine
	I1004 03:28:59.506770   36399 start.go:293] postStartSetup for "ha-994751" (driver="kvm2")
	I1004 03:28:59.506781   36399 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:28:59.506796   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:28:59.507127   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:28:59.507154   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:28:59.510821   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.511256   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:28:59.511282   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.511516   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:28:59.511718   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:28:59.511911   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:28:59.512081   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:28:59.595666   36399 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:28:59.600058   36399 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 03:28:59.600085   36399 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 03:28:59.600160   36399 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 03:28:59.600407   36399 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 03:28:59.600426   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /etc/ssl/certs/168792.pem
	I1004 03:28:59.600525   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:28:59.610733   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:28:59.635865   36399 start.go:296] duration metric: took 129.080659ms for postStartSetup
	I1004 03:28:59.635912   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:28:59.636222   36399 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1004 03:28:59.636251   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:28:59.639189   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.639630   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:28:59.639653   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.639829   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:28:59.640075   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:28:59.640222   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:28:59.640444   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	W1004 03:28:59.718167   36399 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1004 03:28:59.718191   36399 fix.go:56] duration metric: took 1m32.126921833s for fixHost
	I1004 03:28:59.718212   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:28:59.721053   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.721384   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:28:59.721412   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.721613   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:28:59.721799   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:28:59.721956   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:28:59.722071   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:28:59.722213   36399 main.go:141] libmachine: Using SSH client type: native
	I1004 03:28:59.722388   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:28:59.722408   36399 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 03:28:59.824955   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728012539.791372498
	
	I1004 03:28:59.824977   36399 fix.go:216] guest clock: 1728012539.791372498
	I1004 03:28:59.824985   36399 fix.go:229] Guest: 2024-10-04 03:28:59.791372498 +0000 UTC Remote: 2024-10-04 03:28:59.718196968 +0000 UTC m=+92.257333562 (delta=73.17553ms)
	I1004 03:28:59.825023   36399 fix.go:200] guest clock delta is within tolerance: 73.17553ms
	I1004 03:28:59.825030   36399 start.go:83] releasing machines lock for "ha-994751", held for 1m32.23377146s
	I1004 03:28:59.825058   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:28:59.825327   36399 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:28:59.827829   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.828233   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:28:59.828264   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.828438   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:28:59.828987   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:28:59.829220   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:28:59.829302   36399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:28:59.829345   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:28:59.829406   36399 ssh_runner.go:195] Run: cat /version.json
	I1004 03:28:59.829429   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:28:59.831860   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.832136   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:28:59.832162   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.832188   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.832294   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:28:59.832483   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:28:59.832613   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:28:59.832634   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.832636   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:28:59.832762   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:28:59.832766   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:28:59.832896   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:28:59.833037   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:28:59.833168   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:28:59.905472   36399 ssh_runner.go:195] Run: systemctl --version
	I1004 03:28:59.931412   36399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:29:00.096371   36399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 03:29:00.102947   36399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 03:29:00.102997   36399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:29:00.113007   36399 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1004 03:29:00.113024   36399 start.go:495] detecting cgroup driver to use...
	I1004 03:29:00.113073   36399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:29:00.129940   36399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:29:00.144302   36399 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:29:00.144353   36399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:29:00.158853   36399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:29:00.173169   36399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:29:00.319271   36399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:29:00.465231   36399 docker.go:233] disabling docker service ...
	I1004 03:29:00.465294   36399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:29:00.482574   36399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:29:00.496773   36399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:29:00.649921   36399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:29:00.794247   36399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:29:00.809766   36399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:29:00.829596   36399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:29:00.829669   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:29:00.840333   36399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:29:00.840396   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:29:00.851149   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:29:00.862074   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:29:00.872829   36399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:29:00.883727   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:29:00.894679   36399 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:29:00.907370   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:29:00.919075   36399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:29:00.929727   36399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:29:00.939387   36399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:29:01.092711   36399 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 03:29:07.693226   36399 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.600478103s)
	I1004 03:29:07.693266   36399 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:29:07.693318   36399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:29:07.699341   36399 start.go:563] Will wait 60s for crictl version
	I1004 03:29:07.699402   36399 ssh_runner.go:195] Run: which crictl
	I1004 03:29:07.703491   36399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:29:07.745310   36399 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 03:29:07.745393   36399 ssh_runner.go:195] Run: crio --version
	I1004 03:29:07.775268   36399 ssh_runner.go:195] Run: crio --version
	I1004 03:29:07.809105   36399 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 03:29:07.810570   36399 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:29:07.813456   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:29:07.813811   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:29:07.813835   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:29:07.814065   36399 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 03:29:07.819175   36399 kubeadm.go:883] updating cluster {Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.134 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 03:29:07.819312   36399 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:29:07.819355   36399 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:29:07.866524   36399 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:29:07.866546   36399 crio.go:433] Images already preloaded, skipping extraction
	I1004 03:29:07.866589   36399 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:29:07.903658   36399 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:29:07.903685   36399 cache_images.go:84] Images are preloaded, skipping loading
	I1004 03:29:07.903695   36399 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.31.1 crio true true} ...
	I1004 03:29:07.903825   36399 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-994751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 03:29:07.903906   36399 ssh_runner.go:195] Run: crio config
	I1004 03:29:07.955886   36399 cni.go:84] Creating CNI manager for ""
	I1004 03:29:07.955906   36399 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1004 03:29:07.955914   36399 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 03:29:07.955941   36399 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-994751 NodeName:ha-994751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 03:29:07.956099   36399 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-994751"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 03:29:07.956123   36399 kube-vip.go:115] generating kube-vip config ...
	I1004 03:29:07.956170   36399 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1004 03:29:07.968162   36399 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1004 03:29:07.968265   36399 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1004 03:29:07.968315   36399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:29:07.978369   36399 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 03:29:07.978437   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1004 03:29:07.988393   36399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1004 03:29:08.005411   36399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:29:08.022174   36399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1004 03:29:08.039625   36399 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1004 03:29:08.059625   36399 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:29:08.063561   36399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:29:08.211662   36399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:29:08.228351   36399 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751 for IP: 192.168.39.65
	I1004 03:29:08.228374   36399 certs.go:194] generating shared ca certs ...
	I1004 03:29:08.228394   36399 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:29:08.228529   36399 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 03:29:08.228576   36399 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 03:29:08.228585   36399 certs.go:256] generating profile certs ...
	I1004 03:29:08.228660   36399 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key
	I1004 03:29:08.228685   36399 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.b5b33258
	I1004 03:29:08.228703   36399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.b5b33258 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65 192.168.39.117 192.168.39.53 192.168.39.254]
	I1004 03:29:08.351888   36399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.b5b33258 ...
	I1004 03:29:08.351919   36399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.b5b33258: {Name:mk0da6460470d3bf380479e3c5bb84dcbb5a8d25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:29:08.352090   36399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.b5b33258 ...
	I1004 03:29:08.352102   36399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.b5b33258: {Name:mka28539f0ab48ed69b4c4b2556a682cc04c0cf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:29:08.352167   36399 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.b5b33258 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt
	I1004 03:29:08.352310   36399 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.b5b33258 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key
	I1004 03:29:08.352434   36399 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key
	I1004 03:29:08.352448   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:29:08.352461   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:29:08.352471   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:29:08.352484   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:29:08.352496   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:29:08.352509   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:29:08.352519   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:29:08.352530   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:29:08.352580   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 03:29:08.352607   36399 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 03:29:08.352614   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:29:08.352636   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:29:08.352653   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:29:08.352671   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 03:29:08.352710   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:29:08.352735   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:29:08.352749   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem -> /usr/share/ca-certificates/16879.pem
	I1004 03:29:08.352761   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /usr/share/ca-certificates/168792.pem
	I1004 03:29:08.353310   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:29:08.379008   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 03:29:08.403771   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:29:08.428658   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 03:29:08.453952   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1004 03:29:08.478680   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 03:29:08.502402   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:29:08.527438   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:29:08.552971   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:29:08.586986   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 03:29:08.681081   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 03:29:08.716489   36399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 03:29:08.738890   36399 ssh_runner.go:195] Run: openssl version
	I1004 03:29:08.745964   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:29:08.759584   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:29:08.764568   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:29:08.764617   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:29:08.771773   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:29:08.783960   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 03:29:08.802441   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 03:29:08.807151   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:29:08.807208   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 03:29:08.813381   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 03:29:08.823733   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 03:29:08.842189   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 03:29:08.852026   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:29:08.852076   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 03:29:08.859189   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:29:08.872677   36399 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:29:08.879128   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 03:29:08.885847   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 03:29:08.896770   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 03:29:08.904993   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 03:29:08.911769   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 03:29:08.917988   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1004 03:29:08.927848   36399 kubeadm.go:392] StartCluster: {Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.134 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:29:08.927945   36399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 03:29:08.928017   36399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 03:29:08.971398   36399 cri.go:89] found id: "881d54dbcaba719e0166cbf5d8dde7c2cbf92e158ad52bee95e5bd0bac99bac1"
	I1004 03:29:08.971421   36399 cri.go:89] found id: "90728185b95ff345253f97ee7d081a4368d8d3aeb2772fd30044e644b0f79cbe"
	I1004 03:29:08.971426   36399 cri.go:89] found id: "f51ed6216df05368ed6eb52233d1c1286dc4bc22b108acfc5574bdcc5166be94"
	I1004 03:29:08.971431   36399 cri.go:89] found id: "4384dc3c315856ea3eb68d629d7d4c60baaa139d03b85d06551d92212d953265"
	I1004 03:29:08.971435   36399 cri.go:89] found id: "e49637421f2a08b39b3da14ab0afe69cec5437121716c2a15bd1721e2f3947d8"
	I1004 03:29:08.971442   36399 cri.go:89] found id: "eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586"
	I1004 03:29:08.971445   36399 cri.go:89] found id: "93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd"
	I1004 03:29:08.971448   36399 cri.go:89] found id: "6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99"
	I1004 03:29:08.971451   36399 cri.go:89] found id: "731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160"
	I1004 03:29:08.971456   36399 cri.go:89] found id: "8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f"
	I1004 03:29:08.971459   36399 cri.go:89] found id: "e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec"
	I1004 03:29:08.971462   36399 cri.go:89] found id: "f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec"
	I1004 03:29:08.971466   36399 cri.go:89] found id: "849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe"
	I1004 03:29:08.971471   36399 cri.go:89] found id: "f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8"
	I1004 03:29:08.971483   36399 cri.go:89] found id: ""
	I1004 03:29:08.971534   36399 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-994751 -n ha-994751
helpers_test.go:261: (dbg) Run:  kubectl --context ha-994751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (363.97s)
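Note on the certificate checks in the post-mortem log above: before restarting the cluster, minikube runs "openssl x509 -noout -in <cert> -checkend 86400" against each control-plane certificate, i.e. it verifies that none of them expires within the next 24 hours (86400 seconds). The Go sketch below illustrates the same check using only the standard library; it is not minikube's code, and the function name expiresWithin and the hard-coded path are assumptions for illustration only.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at certPath expires within the
// given window (the same question "openssl x509 -checkend <seconds>" answers).
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + window" is past NotAfter, i.e. the cert expires inside the window.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; any PEM-encoded certificate works here.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}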

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 stop -v=7 --alsologtostderr
E1004 03:32:08.993935   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:32:15.014816   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-994751 stop -v=7 --alsologtostderr: exit status 82 (2m0.448520716s)

                                                
                                                
-- stdout --
	* Stopping node "ha-994751-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 03:31:46.635079   38185 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:31:46.635202   38185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:31:46.635213   38185 out.go:358] Setting ErrFile to fd 2...
	I1004 03:31:46.635220   38185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:31:46.635412   38185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 03:31:46.635684   38185 out.go:352] Setting JSON to false
	I1004 03:31:46.635807   38185 mustload.go:65] Loading cluster: ha-994751
	I1004 03:31:46.636276   38185 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:31:46.636409   38185 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:31:46.636648   38185 mustload.go:65] Loading cluster: ha-994751
	I1004 03:31:46.636826   38185 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:31:46.636860   38185 stop.go:39] StopHost: ha-994751-m04
	I1004 03:31:46.637396   38185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:31:46.637453   38185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:31:46.652314   38185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46015
	I1004 03:31:46.652752   38185 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:31:46.653438   38185 main.go:141] libmachine: Using API Version  1
	I1004 03:31:46.653470   38185 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:31:46.653824   38185 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:31:46.655904   38185 out.go:177] * Stopping node "ha-994751-m04"  ...
	I1004 03:31:46.657111   38185 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1004 03:31:46.657134   38185 main.go:141] libmachine: (ha-994751-m04) Calling .DriverName
	I1004 03:31:46.657305   38185 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1004 03:31:46.657328   38185 main.go:141] libmachine: (ha-994751-m04) Calling .GetSSHHostname
	I1004 03:31:46.660338   38185 main.go:141] libmachine: (ha-994751-m04) DBG | domain ha-994751-m04 has defined MAC address 52:54:00:5e:d5:b5 in network mk-ha-994751
	I1004 03:31:46.660863   38185 main.go:141] libmachine: (ha-994751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:d5:b5", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:31:15 +0000 UTC Type:0 Mac:52:54:00:5e:d5:b5 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-994751-m04 Clientid:01:52:54:00:5e:d5:b5}
	I1004 03:31:46.660899   38185 main.go:141] libmachine: (ha-994751-m04) DBG | domain ha-994751-m04 has defined IP address 192.168.39.134 and MAC address 52:54:00:5e:d5:b5 in network mk-ha-994751
	I1004 03:31:46.661023   38185 main.go:141] libmachine: (ha-994751-m04) Calling .GetSSHPort
	I1004 03:31:46.661215   38185 main.go:141] libmachine: (ha-994751-m04) Calling .GetSSHKeyPath
	I1004 03:31:46.661360   38185 main.go:141] libmachine: (ha-994751-m04) Calling .GetSSHUsername
	I1004 03:31:46.661485   38185 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751-m04/id_rsa Username:docker}
	I1004 03:31:46.742801   38185 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1004 03:31:46.794907   38185 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1004 03:31:46.847894   38185 main.go:141] libmachine: Stopping "ha-994751-m04"...
	I1004 03:31:46.847965   38185 main.go:141] libmachine: (ha-994751-m04) Calling .GetState
	I1004 03:31:46.850072   38185 main.go:141] libmachine: (ha-994751-m04) Calling .Stop
	I1004 03:31:46.853955   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 0/120
	I1004 03:31:47.855513   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 1/120
	I1004 03:31:48.857331   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 2/120
	I1004 03:31:49.858714   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 3/120
	I1004 03:31:50.859945   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 4/120
	I1004 03:31:51.861678   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 5/120
	I1004 03:31:52.862902   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 6/120
	I1004 03:31:53.864320   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 7/120
	I1004 03:31:54.866155   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 8/120
	I1004 03:31:55.867631   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 9/120
	I1004 03:31:56.869491   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 10/120
	I1004 03:31:57.870730   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 11/120
	I1004 03:31:58.872431   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 12/120
	I1004 03:31:59.874097   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 13/120
	I1004 03:32:00.875284   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 14/120
	I1004 03:32:01.876956   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 15/120
	I1004 03:32:02.878342   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 16/120
	I1004 03:32:03.879589   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 17/120
	I1004 03:32:04.881052   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 18/120
	I1004 03:32:05.882156   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 19/120
	I1004 03:32:06.884100   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 20/120
	I1004 03:32:07.885291   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 21/120
	I1004 03:32:08.886429   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 22/120
	I1004 03:32:09.887663   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 23/120
	I1004 03:32:10.888779   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 24/120
	I1004 03:32:11.890381   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 25/120
	I1004 03:32:12.891726   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 26/120
	I1004 03:32:13.893083   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 27/120
	I1004 03:32:14.894311   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 28/120
	I1004 03:32:15.895480   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 29/120
	I1004 03:32:16.897338   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 30/120
	I1004 03:32:17.898579   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 31/120
	I1004 03:32:18.900309   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 32/120
	I1004 03:32:19.902149   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 33/120
	I1004 03:32:20.903669   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 34/120
	I1004 03:32:21.905350   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 35/120
	I1004 03:32:22.906961   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 36/120
	I1004 03:32:23.908242   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 37/120
	I1004 03:32:24.909507   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 38/120
	I1004 03:32:25.910707   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 39/120
	I1004 03:32:26.912922   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 40/120
	I1004 03:32:27.914116   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 41/120
	I1004 03:32:28.915383   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 42/120
	I1004 03:32:29.916834   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 43/120
	I1004 03:32:30.918386   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 44/120
	I1004 03:32:31.920347   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 45/120
	I1004 03:32:32.921584   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 46/120
	I1004 03:32:33.922732   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 47/120
	I1004 03:32:34.924077   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 48/120
	I1004 03:32:35.926166   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 49/120
	I1004 03:32:36.927968   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 50/120
	I1004 03:32:37.929210   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 51/120
	I1004 03:32:38.930730   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 52/120
	I1004 03:32:39.932165   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 53/120
	I1004 03:32:40.934371   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 54/120
	I1004 03:32:41.936287   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 55/120
	I1004 03:32:42.937770   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 56/120
	I1004 03:32:43.938882   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 57/120
	I1004 03:32:44.940380   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 58/120
	I1004 03:32:45.942163   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 59/120
	I1004 03:32:46.944150   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 60/120
	I1004 03:32:47.946020   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 61/120
	I1004 03:32:48.947392   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 62/120
	I1004 03:32:49.948899   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 63/120
	I1004 03:32:50.950282   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 64/120
	I1004 03:32:51.952328   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 65/120
	I1004 03:32:52.953577   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 66/120
	I1004 03:32:53.955108   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 67/120
	I1004 03:32:54.956489   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 68/120
	I1004 03:32:55.958246   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 69/120
	I1004 03:32:56.960504   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 70/120
	I1004 03:32:57.962467   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 71/120
	I1004 03:32:58.963595   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 72/120
	I1004 03:32:59.964882   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 73/120
	I1004 03:33:00.966255   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 74/120
	I1004 03:33:01.968728   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 75/120
	I1004 03:33:02.970281   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 76/120
	I1004 03:33:03.971601   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 77/120
	I1004 03:33:04.972940   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 78/120
	I1004 03:33:05.974070   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 79/120
	I1004 03:33:06.976089   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 80/120
	I1004 03:33:07.977376   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 81/120
	I1004 03:33:08.978555   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 82/120
	I1004 03:33:09.979776   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 83/120
	I1004 03:33:10.981060   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 84/120
	I1004 03:33:11.983200   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 85/120
	I1004 03:33:12.984390   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 86/120
	I1004 03:33:13.985867   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 87/120
	I1004 03:33:14.987207   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 88/120
	I1004 03:33:15.988523   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 89/120
	I1004 03:33:16.990512   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 90/120
	I1004 03:33:17.991935   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 91/120
	I1004 03:33:18.993116   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 92/120
	I1004 03:33:19.994428   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 93/120
	I1004 03:33:20.995823   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 94/120
	I1004 03:33:21.997673   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 95/120
	I1004 03:33:22.999886   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 96/120
	I1004 03:33:24.001314   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 97/120
	I1004 03:33:25.002585   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 98/120
	I1004 03:33:26.004141   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 99/120
	I1004 03:33:27.005843   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 100/120
	I1004 03:33:28.007148   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 101/120
	I1004 03:33:29.008802   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 102/120
	I1004 03:33:30.010008   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 103/120
	I1004 03:33:31.011507   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 104/120
	I1004 03:33:32.013402   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 105/120
	I1004 03:33:33.014786   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 106/120
	I1004 03:33:34.015950   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 107/120
	I1004 03:33:35.017157   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 108/120
	I1004 03:33:36.018557   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 109/120
	I1004 03:33:37.020714   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 110/120
	I1004 03:33:38.022627   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 111/120
	I1004 03:33:39.024183   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 112/120
	I1004 03:33:40.025376   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 113/120
	I1004 03:33:41.026885   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 114/120
	I1004 03:33:42.028923   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 115/120
	I1004 03:33:43.030206   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 116/120
	I1004 03:33:44.031525   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 117/120
	I1004 03:33:45.032809   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 118/120
	I1004 03:33:46.034136   38185 main.go:141] libmachine: (ha-994751-m04) Waiting for machine to stop 119/120
	I1004 03:33:47.035272   38185 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1004 03:33:47.035328   38185 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 03:33:47.037229   38185 out.go:201] 
	W1004 03:33:47.038522   38185 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1004 03:33:47.038537   38185 out.go:270] * 
	* 
	W1004 03:33:47.040616   38185 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 03:33:47.041818   38185 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-994751 stop -v=7 --alsologtostderr": exit status 82
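The stderr above shows the shape of this failure: libmachine polls the node once per second ("Waiting for machine to stop 0/120" through "119/120") and, because the guest is still "Running" after 120 attempts, gives up with GUEST_STOP_TIMEOUT and exit status 82, roughly two minutes after the stop was requested. The Go sketch below mirrors that wait-loop pattern; the names waitForStop and state are assumptions for illustration, not minikube's actual API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls state once per second, up to maxAttempts times, and returns
// an error if the machine never leaves the "Running" state.
func waitForStop(state func() string, maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		if state() != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A guest that ignores the stop request reproduces the timeout seen above
	// (the real run uses 120 attempts); 5 attempts keep this demo short.
	err := waitForStop(func() string { return "Running" }, 5)
	fmt.Println("stop err:", err)
}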
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr: (18.848498277s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-994751 -n ha-994751
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-994751 logs -n 25: (1.969892258s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-994751 ssh -n ha-994751-m02 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m03_ha-994751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04:/home/docker/cp-test_ha-994751-m03_ha-994751-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m04 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m03_ha-994751-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp testdata/cp-test.txt                                                | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1363640037/001/cp-test_ha-994751-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751:/home/docker/cp-test_ha-994751-m04_ha-994751.txt                       |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751 sudo cat                                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751.txt                                 |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m02:/home/docker/cp-test_ha-994751-m04_ha-994751-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m02 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m03:/home/docker/cp-test_ha-994751-m04_ha-994751-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n                                                                 | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | ha-994751-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-994751 ssh -n ha-994751-m03 sudo cat                                          | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC | 04 Oct 24 03:22 UTC |
	|         | /home/docker/cp-test_ha-994751-m04_ha-994751-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-994751 node stop m02 -v=7                                                     | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-994751 node start m02 -v=7                                                    | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-994751 -v=7                                                           | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-994751 -v=7                                                                | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-994751 --wait=true -v=7                                                    | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:27 UTC | 04 Oct 24 03:31 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-994751                                                                | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:31 UTC |                     |
	| node    | ha-994751 node delete m03 -v=7                                                   | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:31 UTC | 04 Oct 24 03:31 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-994751 stop -v=7                                                              | ha-994751 | jenkins | v1.34.0 | 04 Oct 24 03:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 03:27:27
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 03:27:27.500451   36399 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:27:27.501055   36399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:27:27.501071   36399 out.go:358] Setting ErrFile to fd 2...
	I1004 03:27:27.501079   36399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:27:27.501357   36399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 03:27:27.501934   36399 out.go:352] Setting JSON to false
	I1004 03:27:27.502911   36399 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4192,"bootTime":1728008255,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 03:27:27.503001   36399 start.go:139] virtualization: kvm guest
	I1004 03:27:27.505363   36399 out.go:177] * [ha-994751] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 03:27:27.506622   36399 notify.go:220] Checking for updates...
	I1004 03:27:27.506672   36399 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:27:27.508073   36399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:27:27.509378   36399 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:27:27.510718   36399 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:27:27.511948   36399 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 03:27:27.513338   36399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:27:27.514960   36399 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:27:27.515046   36399 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:27:27.515471   36399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:27:27.515513   36399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:27:27.531986   36399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I1004 03:27:27.532412   36399 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:27:27.532953   36399 main.go:141] libmachine: Using API Version  1
	I1004 03:27:27.532977   36399 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:27:27.533389   36399 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:27:27.533613   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:27:27.569838   36399 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 03:27:27.571130   36399 start.go:297] selected driver: kvm2
	I1004 03:27:27.571143   36399 start.go:901] validating driver "kvm2" against &{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.134 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:27:27.571289   36399 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:27:27.571612   36399 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:27:27.571692   36399 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 03:27:27.586267   36399 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 03:27:27.587046   36399 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:27:27.587095   36399 cni.go:84] Creating CNI manager for ""
	I1004 03:27:27.587148   36399 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1004 03:27:27.587209   36399 start.go:340] cluster config:
	{Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.3
9.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.134 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:
false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:27:27.587330   36399 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:27:27.589250   36399 out.go:177] * Starting "ha-994751" primary control-plane node in "ha-994751" cluster
	I1004 03:27:27.590752   36399 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:27:27.590787   36399 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 03:27:27.590793   36399 cache.go:56] Caching tarball of preloaded images
	I1004 03:27:27.590868   36399 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 03:27:27.590880   36399 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:27:27.590994   36399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/config.json ...
	I1004 03:27:27.591178   36399 start.go:360] acquireMachinesLock for ha-994751: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 03:27:27.591248   36399 start.go:364] duration metric: took 54.417µs to acquireMachinesLock for "ha-994751"
	I1004 03:27:27.591262   36399 start.go:96] Skipping create...Using existing machine configuration
	I1004 03:27:27.591269   36399 fix.go:54] fixHost starting: 
	I1004 03:27:27.591526   36399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:27:27.591554   36399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:27:27.605468   36399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46007
	I1004 03:27:27.605851   36399 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:27:27.606293   36399 main.go:141] libmachine: Using API Version  1
	I1004 03:27:27.606310   36399 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:27:27.606688   36399 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:27:27.606873   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:27:27.607010   36399 main.go:141] libmachine: (ha-994751) Calling .GetState
	I1004 03:27:27.608481   36399 fix.go:112] recreateIfNeeded on ha-994751: state=Running err=<nil>
	W1004 03:27:27.608504   36399 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 03:27:27.611329   36399 out.go:177] * Updating the running kvm2 "ha-994751" VM ...
	I1004 03:27:27.612694   36399 machine.go:93] provisionDockerMachine start ...
	I1004 03:27:27.612713   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:27:27.612889   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:27:27.615203   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.615696   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:27:27.615722   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.615826   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:27:27.615965   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:27.616084   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:27.616196   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:27:27.616318   36399 main.go:141] libmachine: Using SSH client type: native
	I1004 03:27:27.616497   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:27:27.616508   36399 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 03:27:27.721174   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-994751
	
	I1004 03:27:27.721201   36399 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:27:27.721452   36399 buildroot.go:166] provisioning hostname "ha-994751"
	I1004 03:27:27.721482   36399 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:27:27.721695   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:27:27.724556   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.724949   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:27:27.724978   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.725103   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:27:27.725293   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:27.725408   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:27.725564   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:27:27.725710   36399 main.go:141] libmachine: Using SSH client type: native
	I1004 03:27:27.725930   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:27:27.725961   36399 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-994751 && echo "ha-994751" | sudo tee /etc/hostname
	I1004 03:27:27.843801   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-994751
	
	I1004 03:27:27.843833   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:27:27.846411   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.846748   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:27:27.846774   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.846957   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:27:27.847141   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:27.847532   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:27.847677   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:27:27.847850   36399 main.go:141] libmachine: Using SSH client type: native
	I1004 03:27:27.848018   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:27:27.848034   36399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-994751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-994751/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-994751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:27:27.948554   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:27:27.948580   36399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 03:27:27.948594   36399 buildroot.go:174] setting up certificates
	I1004 03:27:27.948606   36399 provision.go:84] configureAuth start
	I1004 03:27:27.948617   36399 main.go:141] libmachine: (ha-994751) Calling .GetMachineName
	I1004 03:27:27.948905   36399 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:27:27.951371   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.951747   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:27:27.951771   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.951931   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:27:27.954165   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.954529   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:27:27.954566   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:27.954702   36399 provision.go:143] copyHostCerts
	I1004 03:27:27.954729   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:27:27.954778   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 03:27:27.954791   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:27:27.954873   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 03:27:27.954982   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:27:27.955008   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 03:27:27.955017   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:27:27.955053   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 03:27:27.955167   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:27:27.955190   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 03:27:27.955200   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:27:27.955240   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 03:27:27.955319   36399 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.ha-994751 san=[127.0.0.1 192.168.39.65 ha-994751 localhost minikube]
	I1004 03:27:28.427197   36399 provision.go:177] copyRemoteCerts
	I1004 03:27:28.427256   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:27:28.427288   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:27:28.430042   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:28.430393   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:27:28.430419   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:28.430629   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:27:28.430819   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:28.430963   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:27:28.431175   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:27:28.512533   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:27:28.512643   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:27:28.542281   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:27:28.542347   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1004 03:27:28.570001   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:27:28.570067   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 03:27:28.597404   36399 provision.go:87] duration metric: took 648.786199ms to configureAuth
	I1004 03:27:28.597437   36399 buildroot.go:189] setting minikube options for container-runtime
	I1004 03:27:28.597690   36399 config.go:182] Loaded profile config "ha-994751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:27:28.597772   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:27:28.600512   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:28.600920   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:27:28.600946   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:27:28.601094   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:27:28.601244   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:28.601390   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:27:28.601519   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:27:28.601695   36399 main.go:141] libmachine: Using SSH client type: native
	I1004 03:27:28.601871   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:27:28.601887   36399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:28:59.506730   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:28:59.506758   36399 machine.go:96] duration metric: took 1m31.894051445s to provisionDockerMachine
	I1004 03:28:59.506770   36399 start.go:293] postStartSetup for "ha-994751" (driver="kvm2")
	I1004 03:28:59.506781   36399 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:28:59.506796   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:28:59.507127   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:28:59.507154   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:28:59.510821   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.511256   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:28:59.511282   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.511516   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:28:59.511718   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:28:59.511911   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:28:59.512081   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:28:59.595666   36399 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:28:59.600058   36399 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 03:28:59.600085   36399 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 03:28:59.600160   36399 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 03:28:59.600407   36399 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 03:28:59.600426   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /etc/ssl/certs/168792.pem
	I1004 03:28:59.600525   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:28:59.610733   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:28:59.635865   36399 start.go:296] duration metric: took 129.080659ms for postStartSetup
	I1004 03:28:59.635912   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:28:59.636222   36399 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1004 03:28:59.636251   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:28:59.639189   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.639630   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:28:59.639653   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.639829   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:28:59.640075   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:28:59.640222   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:28:59.640444   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	W1004 03:28:59.718167   36399 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1004 03:28:59.718191   36399 fix.go:56] duration metric: took 1m32.126921833s for fixHost
	I1004 03:28:59.718212   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:28:59.721053   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.721384   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:28:59.721412   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.721613   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:28:59.721799   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:28:59.721956   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:28:59.722071   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:28:59.722213   36399 main.go:141] libmachine: Using SSH client type: native
	I1004 03:28:59.722388   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1004 03:28:59.722408   36399 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 03:28:59.824955   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728012539.791372498
	
	I1004 03:28:59.824977   36399 fix.go:216] guest clock: 1728012539.791372498
	I1004 03:28:59.824985   36399 fix.go:229] Guest: 2024-10-04 03:28:59.791372498 +0000 UTC Remote: 2024-10-04 03:28:59.718196968 +0000 UTC m=+92.257333562 (delta=73.17553ms)
	I1004 03:28:59.825023   36399 fix.go:200] guest clock delta is within tolerance: 73.17553ms
	I1004 03:28:59.825030   36399 start.go:83] releasing machines lock for "ha-994751", held for 1m32.23377146s
	I1004 03:28:59.825058   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:28:59.825327   36399 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:28:59.827829   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.828233   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:28:59.828264   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.828438   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:28:59.828987   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:28:59.829220   36399 main.go:141] libmachine: (ha-994751) Calling .DriverName
	I1004 03:28:59.829302   36399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:28:59.829345   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:28:59.829406   36399 ssh_runner.go:195] Run: cat /version.json
	I1004 03:28:59.829429   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHHostname
	I1004 03:28:59.831860   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.832136   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:28:59.832162   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.832188   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.832294   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:28:59.832483   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:28:59.832613   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:28:59.832634   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:28:59.832636   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:28:59.832762   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHPort
	I1004 03:28:59.832766   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:28:59.832896   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHKeyPath
	I1004 03:28:59.833037   36399 main.go:141] libmachine: (ha-994751) Calling .GetSSHUsername
	I1004 03:28:59.833168   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/ha-994751/id_rsa Username:docker}
	I1004 03:28:59.905472   36399 ssh_runner.go:195] Run: systemctl --version
	I1004 03:28:59.931412   36399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:29:00.096371   36399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 03:29:00.102947   36399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 03:29:00.102997   36399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:29:00.113007   36399 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1004 03:29:00.113024   36399 start.go:495] detecting cgroup driver to use...
	I1004 03:29:00.113073   36399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:29:00.129940   36399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:29:00.144302   36399 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:29:00.144353   36399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:29:00.158853   36399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:29:00.173169   36399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:29:00.319271   36399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:29:00.465231   36399 docker.go:233] disabling docker service ...
	I1004 03:29:00.465294   36399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:29:00.482574   36399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:29:00.496773   36399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:29:00.649921   36399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:29:00.794247   36399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:29:00.809766   36399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:29:00.829596   36399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:29:00.829669   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:29:00.840333   36399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:29:00.840396   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:29:00.851149   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:29:00.862074   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:29:00.872829   36399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:29:00.883727   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:29:00.894679   36399 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:29:00.907370   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:29:00.919075   36399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:29:00.929727   36399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:29:00.939387   36399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:29:01.092711   36399 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 03:29:07.693226   36399 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.600478103s)
	I1004 03:29:07.693266   36399 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:29:07.693318   36399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:29:07.699341   36399 start.go:563] Will wait 60s for crictl version
	I1004 03:29:07.699402   36399 ssh_runner.go:195] Run: which crictl
	I1004 03:29:07.703491   36399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:29:07.745310   36399 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 03:29:07.745393   36399 ssh_runner.go:195] Run: crio --version
	I1004 03:29:07.775268   36399 ssh_runner.go:195] Run: crio --version
	I1004 03:29:07.809105   36399 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 03:29:07.810570   36399 main.go:141] libmachine: (ha-994751) Calling .GetIP
	I1004 03:29:07.813456   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:29:07.813811   36399 main.go:141] libmachine: (ha-994751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:b2:a8", ip: ""} in network mk-ha-994751: {Iface:virbr1 ExpiryTime:2024-10-04 04:18:20 +0000 UTC Type:0 Mac:52:54:00:9b:b2:a8 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-994751 Clientid:01:52:54:00:9b:b2:a8}
	I1004 03:29:07.813835   36399 main.go:141] libmachine: (ha-994751) DBG | domain ha-994751 has defined IP address 192.168.39.65 and MAC address 52:54:00:9b:b2:a8 in network mk-ha-994751
	I1004 03:29:07.814065   36399 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 03:29:07.819175   36399 kubeadm.go:883] updating cluster {Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.134 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 03:29:07.819312   36399 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:29:07.819355   36399 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:29:07.866524   36399 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:29:07.866546   36399 crio.go:433] Images already preloaded, skipping extraction
	I1004 03:29:07.866589   36399 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:29:07.903658   36399 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:29:07.903685   36399 cache_images.go:84] Images are preloaded, skipping loading
	I1004 03:29:07.903695   36399 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.31.1 crio true true} ...
	I1004 03:29:07.903825   36399 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-994751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 03:29:07.903906   36399 ssh_runner.go:195] Run: crio config
	I1004 03:29:07.955886   36399 cni.go:84] Creating CNI manager for ""
	I1004 03:29:07.955906   36399 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1004 03:29:07.955914   36399 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 03:29:07.955941   36399 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-994751 NodeName:ha-994751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 03:29:07.956099   36399 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-994751"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 03:29:07.956123   36399 kube-vip.go:115] generating kube-vip config ...
	I1004 03:29:07.956170   36399 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1004 03:29:07.968162   36399 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1004 03:29:07.968265   36399 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1004 03:29:07.968315   36399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:29:07.978369   36399 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 03:29:07.978437   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1004 03:29:07.988393   36399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1004 03:29:08.005411   36399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:29:08.022174   36399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1004 03:29:08.039625   36399 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1004 03:29:08.059625   36399 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1004 03:29:08.063561   36399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:29:08.211662   36399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:29:08.228351   36399 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751 for IP: 192.168.39.65
	I1004 03:29:08.228374   36399 certs.go:194] generating shared ca certs ...
	I1004 03:29:08.228394   36399 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:29:08.228529   36399 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 03:29:08.228576   36399 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 03:29:08.228585   36399 certs.go:256] generating profile certs ...
	I1004 03:29:08.228660   36399 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/client.key
	I1004 03:29:08.228685   36399 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.b5b33258
	I1004 03:29:08.228703   36399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.b5b33258 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65 192.168.39.117 192.168.39.53 192.168.39.254]
	I1004 03:29:08.351888   36399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.b5b33258 ...
	I1004 03:29:08.351919   36399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.b5b33258: {Name:mk0da6460470d3bf380479e3c5bb84dcbb5a8d25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:29:08.352090   36399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.b5b33258 ...
	I1004 03:29:08.352102   36399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.b5b33258: {Name:mka28539f0ab48ed69b4c4b2556a682cc04c0cf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:29:08.352167   36399 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt.b5b33258 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt
	I1004 03:29:08.352310   36399 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key.b5b33258 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key
	I1004 03:29:08.352434   36399 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key
	I1004 03:29:08.352448   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:29:08.352461   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:29:08.352471   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:29:08.352484   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:29:08.352496   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:29:08.352509   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:29:08.352519   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:29:08.352530   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:29:08.352580   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 03:29:08.352607   36399 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 03:29:08.352614   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:29:08.352636   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:29:08.352653   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:29:08.352671   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 03:29:08.352710   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:29:08.352735   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:29:08.352749   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem -> /usr/share/ca-certificates/16879.pem
	I1004 03:29:08.352761   36399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /usr/share/ca-certificates/168792.pem
	I1004 03:29:08.353310   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:29:08.379008   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 03:29:08.403771   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:29:08.428658   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 03:29:08.453952   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1004 03:29:08.478680   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 03:29:08.502402   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:29:08.527438   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/ha-994751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:29:08.552971   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:29:08.586986   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 03:29:08.681081   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 03:29:08.716489   36399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 03:29:08.738890   36399 ssh_runner.go:195] Run: openssl version
	I1004 03:29:08.745964   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:29:08.759584   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:29:08.764568   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:29:08.764617   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:29:08.771773   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:29:08.783960   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 03:29:08.802441   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 03:29:08.807151   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:29:08.807208   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 03:29:08.813381   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 03:29:08.823733   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 03:29:08.842189   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 03:29:08.852026   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:29:08.852076   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 03:29:08.859189   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:29:08.872677   36399 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:29:08.879128   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 03:29:08.885847   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 03:29:08.896770   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 03:29:08.904993   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 03:29:08.911769   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 03:29:08.917988   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1004 03:29:08.927848   36399 kubeadm.go:392] StartCluster: {Name:ha-994751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-994751 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.134 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:29:08.927945   36399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 03:29:08.928017   36399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 03:29:08.971398   36399 cri.go:89] found id: "881d54dbcaba719e0166cbf5d8dde7c2cbf92e158ad52bee95e5bd0bac99bac1"
	I1004 03:29:08.971421   36399 cri.go:89] found id: "90728185b95ff345253f97ee7d081a4368d8d3aeb2772fd30044e644b0f79cbe"
	I1004 03:29:08.971426   36399 cri.go:89] found id: "f51ed6216df05368ed6eb52233d1c1286dc4bc22b108acfc5574bdcc5166be94"
	I1004 03:29:08.971431   36399 cri.go:89] found id: "4384dc3c315856ea3eb68d629d7d4c60baaa139d03b85d06551d92212d953265"
	I1004 03:29:08.971435   36399 cri.go:89] found id: "e49637421f2a08b39b3da14ab0afe69cec5437121716c2a15bd1721e2f3947d8"
	I1004 03:29:08.971442   36399 cri.go:89] found id: "eb082a979b36cf62706aeea6fc2b6170f60655a79e04323aee01897eb3551586"
	I1004 03:29:08.971445   36399 cri.go:89] found id: "93aa8fd39f9c0a6eea7a37dcee01631f31b4126eae8333d11e1407860bb3f6cd"
	I1004 03:29:08.971448   36399 cri.go:89] found id: "6a3f40105608f82f73a1bbee29e6b61f1240bcecfdc4768f482ae75a0cf95c99"
	I1004 03:29:08.971451   36399 cri.go:89] found id: "731622c5caa6f3f118f52196f7c510ebf7379862880c26664182d128a54ac160"
	I1004 03:29:08.971456   36399 cri.go:89] found id: "8830f0c28d759db71671a9c3fd1eb4008a66126cb262613e8d011a172e755e0f"
	I1004 03:29:08.971459   36399 cri.go:89] found id: "e49d081b73667cb31a87e5548f9897011cdd79481d389aab65b687ee11c748ec"
	I1004 03:29:08.971462   36399 cri.go:89] found id: "f5568cb7839e2acb5ffc06e849180afcc114c02d6bf373518ef719648eedfeec"
	I1004 03:29:08.971466   36399 cri.go:89] found id: "849282c5067549ba297b7784bd333879a5b9a78de75250ae582db35332ab63fe"
	I1004 03:29:08.971471   36399 cri.go:89] found id: "f041d718c872ffda01314490feb4ed5de2a14dccb6aa02f2328c9dcaf1f5aff8"
	I1004 03:29:08.971483   36399 cri.go:89] found id: ""
	I1004 03:29:08.971534   36399 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-994751 -n ha-994751
helpers_test.go:261: (dbg) Run:  kubectl --context ha-994751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.83s)
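The post-mortem above can be re-run by hand against the same profile; a minimal sketch, assuming the CI binary path out/minikube-linux-amd64 and the ha-994751 profile shown in the logs (the logs -n 25 step mirrors the pattern helpers_test.go uses for other failures in this report):

	out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-994751 -n ha-994751
	kubectl --context ha-994751 get po -A --field-selector=status.phase!=Running
	out/minikube-linux-amd64 -p ha-994751 logs -n 25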

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (330.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-355278
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-355278
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-355278: exit status 82 (2m1.910226146s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-355278-m03"  ...
	* Stopping node "multinode-355278-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-355278" : exit status 82
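Before the restart below, the GUEST_STOP_TIMEOUT box above asks for a log bundle; a minimal sketch of collecting it by hand, assuming the same binary and profile, with the /tmp path being the one named in the box:

	out/minikube-linux-amd64 stop -p multinode-355278
	out/minikube-linux-amd64 -p multinode-355278 logs --file=logs.txt
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log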
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-355278 --wait=true -v=8 --alsologtostderr
E1004 03:52:08.996852   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:52:15.017094   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-355278 --wait=true -v=8 --alsologtostderr: (3m26.168411324s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-355278
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-355278 -n multinode-355278
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-355278 logs -n 25: (2.131865313s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-355278 ssh -n                                                                 | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-355278 cp multinode-355278-m02:/home/docker/cp-test.txt                       | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile498822491/001/cp-test_multinode-355278-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n                                                                 | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-355278 cp multinode-355278-m02:/home/docker/cp-test.txt                       | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278:/home/docker/cp-test_multinode-355278-m02_multinode-355278.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n                                                                 | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n multinode-355278 sudo cat                                       | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | /home/docker/cp-test_multinode-355278-m02_multinode-355278.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-355278 cp multinode-355278-m02:/home/docker/cp-test.txt                       | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m03:/home/docker/cp-test_multinode-355278-m02_multinode-355278-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n                                                                 | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n multinode-355278-m03 sudo cat                                   | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | /home/docker/cp-test_multinode-355278-m02_multinode-355278-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-355278 cp testdata/cp-test.txt                                                | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n                                                                 | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-355278 cp multinode-355278-m03:/home/docker/cp-test.txt                       | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile498822491/001/cp-test_multinode-355278-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n                                                                 | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-355278 cp multinode-355278-m03:/home/docker/cp-test.txt                       | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278:/home/docker/cp-test_multinode-355278-m03_multinode-355278.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n                                                                 | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n multinode-355278 sudo cat                                       | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | /home/docker/cp-test_multinode-355278-m03_multinode-355278.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-355278 cp multinode-355278-m03:/home/docker/cp-test.txt                       | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m02:/home/docker/cp-test_multinode-355278-m03_multinode-355278-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n                                                                 | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n multinode-355278-m02 sudo cat                                   | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | /home/docker/cp-test_multinode-355278-m03_multinode-355278-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-355278 node stop m03                                                          | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	| node    | multinode-355278 node start                                                             | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:49 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-355278                                                                | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:49 UTC |                     |
	| stop    | -p multinode-355278                                                                     | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:49 UTC |                     |
	| start   | -p multinode-355278                                                                     | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:51 UTC | 04 Oct 24 03:54 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-355278                                                                | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:54 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 03:51:26
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 03:51:26.261064   48440 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:51:26.261175   48440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:51:26.261187   48440 out.go:358] Setting ErrFile to fd 2...
	I1004 03:51:26.261192   48440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:51:26.261360   48440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 03:51:26.261904   48440 out.go:352] Setting JSON to false
	I1004 03:51:26.262780   48440 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5631,"bootTime":1728008255,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 03:51:26.262867   48440 start.go:139] virtualization: kvm guest
	I1004 03:51:26.265310   48440 out.go:177] * [multinode-355278] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 03:51:26.267351   48440 notify.go:220] Checking for updates...
	I1004 03:51:26.267373   48440 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:51:26.268757   48440 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:51:26.270053   48440 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:51:26.271358   48440 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:51:26.272563   48440 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 03:51:26.273922   48440 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:51:26.275577   48440 config.go:182] Loaded profile config "multinode-355278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:51:26.275676   48440 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:51:26.276299   48440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:51:26.276353   48440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:51:26.290976   48440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I1004 03:51:26.291471   48440 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:51:26.292097   48440 main.go:141] libmachine: Using API Version  1
	I1004 03:51:26.292124   48440 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:51:26.292500   48440 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:51:26.292743   48440 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:51:26.326881   48440 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 03:51:26.328198   48440 start.go:297] selected driver: kvm2
	I1004 03:51:26.328215   48440 start.go:901] validating driver "kvm2" against &{Name:multinode-355278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:multinode-355278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false insp
ektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:51:26.328342   48440 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:51:26.328717   48440 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:51:26.328794   48440 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 03:51:26.343048   48440 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 03:51:26.343985   48440 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:51:26.344020   48440 cni.go:84] Creating CNI manager for ""
	I1004 03:51:26.344080   48440 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1004 03:51:26.344153   48440 start.go:340] cluster config:
	{Name:multinode-355278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-355278 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:51:26.344338   48440 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:51:26.346085   48440 out.go:177] * Starting "multinode-355278" primary control-plane node in "multinode-355278" cluster
	I1004 03:51:26.347349   48440 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:51:26.347392   48440 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 03:51:26.347404   48440 cache.go:56] Caching tarball of preloaded images
	I1004 03:51:26.347491   48440 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 03:51:26.347505   48440 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:51:26.347690   48440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/config.json ...
	I1004 03:51:26.347911   48440 start.go:360] acquireMachinesLock for multinode-355278: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 03:51:26.347964   48440 start.go:364] duration metric: took 35.478µs to acquireMachinesLock for "multinode-355278"
	I1004 03:51:26.347978   48440 start.go:96] Skipping create...Using existing machine configuration
	I1004 03:51:26.347985   48440 fix.go:54] fixHost starting: 
	I1004 03:51:26.348227   48440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:51:26.348274   48440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:51:26.362930   48440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
	I1004 03:51:26.363367   48440 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:51:26.363798   48440 main.go:141] libmachine: Using API Version  1
	I1004 03:51:26.363821   48440 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:51:26.364117   48440 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:51:26.364308   48440 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:51:26.364456   48440 main.go:141] libmachine: (multinode-355278) Calling .GetState
	I1004 03:51:26.365807   48440 fix.go:112] recreateIfNeeded on multinode-355278: state=Running err=<nil>
	W1004 03:51:26.365822   48440 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 03:51:26.367861   48440 out.go:177] * Updating the running kvm2 "multinode-355278" VM ...
	I1004 03:51:26.369300   48440 machine.go:93] provisionDockerMachine start ...
	I1004 03:51:26.369315   48440 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:51:26.369491   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:51:26.371649   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.372089   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:51:26.372112   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.372271   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:51:26.372418   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:26.372590   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:26.372675   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:51:26.372808   48440 main.go:141] libmachine: Using SSH client type: native
	I1004 03:51:26.373020   48440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1004 03:51:26.373032   48440 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 03:51:26.477209   48440 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-355278
	
	I1004 03:51:26.477235   48440 main.go:141] libmachine: (multinode-355278) Calling .GetMachineName
	I1004 03:51:26.477493   48440 buildroot.go:166] provisioning hostname "multinode-355278"
	I1004 03:51:26.477554   48440 main.go:141] libmachine: (multinode-355278) Calling .GetMachineName
	I1004 03:51:26.477738   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:51:26.480506   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.480934   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:51:26.480966   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.481074   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:51:26.481244   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:26.481377   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:26.481510   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:51:26.481681   48440 main.go:141] libmachine: Using SSH client type: native
	I1004 03:51:26.481828   48440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1004 03:51:26.481839   48440 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-355278 && echo "multinode-355278" | sudo tee /etc/hostname
	I1004 03:51:26.599991   48440 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-355278
	
	I1004 03:51:26.600022   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:51:26.602653   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.602995   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:51:26.603026   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.603138   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:51:26.603318   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:26.603461   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:26.603592   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:51:26.603724   48440 main.go:141] libmachine: Using SSH client type: native
	I1004 03:51:26.603924   48440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1004 03:51:26.603943   48440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-355278' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-355278/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-355278' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:51:26.704494   48440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:51:26.704534   48440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 03:51:26.704550   48440 buildroot.go:174] setting up certificates
	I1004 03:51:26.704558   48440 provision.go:84] configureAuth start
	I1004 03:51:26.704566   48440 main.go:141] libmachine: (multinode-355278) Calling .GetMachineName
	I1004 03:51:26.704792   48440 main.go:141] libmachine: (multinode-355278) Calling .GetIP
	I1004 03:51:26.707494   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.707965   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:51:26.707984   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.708198   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:51:26.710305   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.710651   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:51:26.710689   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.710819   48440 provision.go:143] copyHostCerts
	I1004 03:51:26.710848   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:51:26.710890   48440 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 03:51:26.710901   48440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:51:26.710968   48440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 03:51:26.711053   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:51:26.711070   48440 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 03:51:26.711077   48440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:51:26.711104   48440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 03:51:26.711185   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:51:26.711208   48440 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 03:51:26.711214   48440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:51:26.711237   48440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 03:51:26.711306   48440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.multinode-355278 san=[127.0.0.1 192.168.39.50 localhost minikube multinode-355278]
	I1004 03:51:26.932325   48440 provision.go:177] copyRemoteCerts
	I1004 03:51:26.932377   48440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:51:26.932397   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:51:26.935078   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.935395   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:51:26.935427   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.935598   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:51:26.935798   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:26.935937   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:51:26.936075   48440 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/multinode-355278/id_rsa Username:docker}
	I1004 03:51:27.018530   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:51:27.018602   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:51:27.045600   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:51:27.045672   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1004 03:51:27.070523   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:51:27.070591   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 03:51:27.096314   48440 provision.go:87] duration metric: took 391.744632ms to configureAuth
	I1004 03:51:27.096339   48440 buildroot.go:189] setting minikube options for container-runtime
	I1004 03:51:27.096534   48440 config.go:182] Loaded profile config "multinode-355278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:51:27.096615   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:51:27.099086   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:27.099430   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:51:27.099463   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:27.099654   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:51:27.099838   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:27.099947   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:27.100083   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:51:27.100235   48440 main.go:141] libmachine: Using SSH client type: native
	I1004 03:51:27.100389   48440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1004 03:51:27.100403   48440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:52:57.790704   48440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:52:57.790738   48440 machine.go:96] duration metric: took 1m31.421425476s to provisionDockerMachine
	I1004 03:52:57.790751   48440 start.go:293] postStartSetup for "multinode-355278" (driver="kvm2")
	I1004 03:52:57.790762   48440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:52:57.790780   48440 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:52:57.791081   48440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:52:57.791112   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:52:57.794371   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:57.794760   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:52:57.794786   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:57.794966   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:52:57.795156   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:52:57.795361   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:52:57.795582   48440 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/multinode-355278/id_rsa Username:docker}
	I1004 03:52:57.879285   48440 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:52:57.883739   48440 command_runner.go:130] > NAME=Buildroot
	I1004 03:52:57.883751   48440 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1004 03:52:57.883755   48440 command_runner.go:130] > ID=buildroot
	I1004 03:52:57.883786   48440 command_runner.go:130] > VERSION_ID=2023.02.9
	I1004 03:52:57.883795   48440 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1004 03:52:57.883874   48440 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 03:52:57.883893   48440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 03:52:57.883946   48440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 03:52:57.884013   48440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 03:52:57.884022   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /etc/ssl/certs/168792.pem
	I1004 03:52:57.884101   48440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:52:57.893854   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:52:57.917904   48440 start.go:296] duration metric: took 127.142001ms for postStartSetup
	I1004 03:52:57.917937   48440 fix.go:56] duration metric: took 1m31.569951652s for fixHost
	I1004 03:52:57.917955   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:52:57.920747   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:57.921107   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:52:57.921130   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:57.921262   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:52:57.921464   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:52:57.921624   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:52:57.921808   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:52:57.921965   48440 main.go:141] libmachine: Using SSH client type: native
	I1004 03:52:57.922150   48440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1004 03:52:57.922165   48440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 03:52:58.024738   48440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728013978.001547172
	
	I1004 03:52:58.024755   48440 fix.go:216] guest clock: 1728013978.001547172
	I1004 03:52:58.024762   48440 fix.go:229] Guest: 2024-10-04 03:52:58.001547172 +0000 UTC Remote: 2024-10-04 03:52:57.917940758 +0000 UTC m=+91.691074504 (delta=83.606414ms)
	I1004 03:52:58.024800   48440 fix.go:200] guest clock delta is within tolerance: 83.606414ms
	I1004 03:52:58.024805   48440 start.go:83] releasing machines lock for "multinode-355278", held for 1m31.676831925s
	I1004 03:52:58.024824   48440 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:52:58.025092   48440 main.go:141] libmachine: (multinode-355278) Calling .GetIP
	I1004 03:52:58.027746   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:58.028153   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:52:58.028178   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:58.028289   48440 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:52:58.028724   48440 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:52:58.028876   48440 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:52:58.028992   48440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:52:58.029039   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:52:58.029079   48440 ssh_runner.go:195] Run: cat /version.json
	I1004 03:52:58.029102   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:52:58.031775   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:58.031848   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:58.032157   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:52:58.032203   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:58.032234   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:52:58.032251   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:58.032308   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:52:58.032479   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:52:58.032482   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:52:58.032643   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:52:58.032656   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:52:58.032812   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:52:58.032822   48440 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/multinode-355278/id_rsa Username:docker}
	I1004 03:52:58.032932   48440 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/multinode-355278/id_rsa Username:docker}
	I1004 03:52:58.132884   48440 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I1004 03:52:58.132960   48440 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1004 03:52:58.133033   48440 ssh_runner.go:195] Run: systemctl --version
	I1004 03:52:58.139007   48440 command_runner.go:130] > systemd 252 (252)
	I1004 03:52:58.139046   48440 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1004 03:52:58.139221   48440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:52:58.294982   48440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1004 03:52:58.303619   48440 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1004 03:52:58.304001   48440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 03:52:58.304076   48440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:52:58.313833   48440 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1004 03:52:58.313854   48440 start.go:495] detecting cgroup driver to use...
	I1004 03:52:58.313908   48440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:52:58.330972   48440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:52:58.346135   48440 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:52:58.346194   48440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:52:58.360002   48440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:52:58.374078   48440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:52:58.535942   48440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:52:58.671399   48440 docker.go:233] disabling docker service ...
	I1004 03:52:58.671474   48440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:52:58.692282   48440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:52:58.756406   48440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:52:58.937569   48440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:52:59.123278   48440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:52:59.138558   48440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:52:59.157588   48440 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1004 03:52:59.157629   48440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:52:59.157674   48440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:52:59.168003   48440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:52:59.168061   48440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:52:59.178608   48440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:52:59.189546   48440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:52:59.199660   48440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:52:59.210160   48440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:52:59.220504   48440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:52:59.231726   48440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:52:59.242076   48440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:52:59.251321   48440 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1004 03:52:59.251414   48440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:52:59.261227   48440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:52:59.401493   48440 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 03:53:09.326928   48440 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.925397322s)
	I1004 03:53:09.326957   48440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:53:09.327024   48440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:53:09.332077   48440 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1004 03:53:09.332101   48440 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1004 03:53:09.332107   48440 command_runner.go:130] > Device: 0,22	Inode: 1390        Links: 1
	I1004 03:53:09.332114   48440 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1004 03:53:09.332121   48440 command_runner.go:130] > Access: 2024-10-04 03:53:09.153782493 +0000
	I1004 03:53:09.332127   48440 command_runner.go:130] > Modify: 2024-10-04 03:53:09.153782493 +0000
	I1004 03:53:09.332133   48440 command_runner.go:130] > Change: 2024-10-04 03:53:09.153782493 +0000
	I1004 03:53:09.332139   48440 command_runner.go:130] >  Birth: -
	I1004 03:53:09.332404   48440 start.go:563] Will wait 60s for crictl version
	I1004 03:53:09.332456   48440 ssh_runner.go:195] Run: which crictl
	I1004 03:53:09.336585   48440 command_runner.go:130] > /usr/bin/crictl
	I1004 03:53:09.336725   48440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:53:09.377019   48440 command_runner.go:130] > Version:  0.1.0
	I1004 03:53:09.377042   48440 command_runner.go:130] > RuntimeName:  cri-o
	I1004 03:53:09.377047   48440 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1004 03:53:09.377052   48440 command_runner.go:130] > RuntimeApiVersion:  v1
	I1004 03:53:09.377235   48440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 03:53:09.377322   48440 ssh_runner.go:195] Run: crio --version
	I1004 03:53:09.405286   48440 command_runner.go:130] > crio version 1.29.1
	I1004 03:53:09.405313   48440 command_runner.go:130] > Version:        1.29.1
	I1004 03:53:09.405319   48440 command_runner.go:130] > GitCommit:      unknown
	I1004 03:53:09.405323   48440 command_runner.go:130] > GitCommitDate:  unknown
	I1004 03:53:09.405327   48440 command_runner.go:130] > GitTreeState:   clean
	I1004 03:53:09.405337   48440 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1004 03:53:09.405342   48440 command_runner.go:130] > GoVersion:      go1.21.6
	I1004 03:53:09.405346   48440 command_runner.go:130] > Compiler:       gc
	I1004 03:53:09.405350   48440 command_runner.go:130] > Platform:       linux/amd64
	I1004 03:53:09.405354   48440 command_runner.go:130] > Linkmode:       dynamic
	I1004 03:53:09.405359   48440 command_runner.go:130] > BuildTags:      
	I1004 03:53:09.405363   48440 command_runner.go:130] >   containers_image_ostree_stub
	I1004 03:53:09.405368   48440 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1004 03:53:09.405372   48440 command_runner.go:130] >   btrfs_noversion
	I1004 03:53:09.405376   48440 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1004 03:53:09.405380   48440 command_runner.go:130] >   libdm_no_deferred_remove
	I1004 03:53:09.405385   48440 command_runner.go:130] >   seccomp
	I1004 03:53:09.405393   48440 command_runner.go:130] > LDFlags:          unknown
	I1004 03:53:09.405397   48440 command_runner.go:130] > SeccompEnabled:   true
	I1004 03:53:09.405403   48440 command_runner.go:130] > AppArmorEnabled:  false
	I1004 03:53:09.406531   48440 ssh_runner.go:195] Run: crio --version
	I1004 03:53:09.435327   48440 command_runner.go:130] > crio version 1.29.1
	I1004 03:53:09.435347   48440 command_runner.go:130] > Version:        1.29.1
	I1004 03:53:09.435353   48440 command_runner.go:130] > GitCommit:      unknown
	I1004 03:53:09.435358   48440 command_runner.go:130] > GitCommitDate:  unknown
	I1004 03:53:09.435362   48440 command_runner.go:130] > GitTreeState:   clean
	I1004 03:53:09.435367   48440 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1004 03:53:09.435371   48440 command_runner.go:130] > GoVersion:      go1.21.6
	I1004 03:53:09.435375   48440 command_runner.go:130] > Compiler:       gc
	I1004 03:53:09.435380   48440 command_runner.go:130] > Platform:       linux/amd64
	I1004 03:53:09.435384   48440 command_runner.go:130] > Linkmode:       dynamic
	I1004 03:53:09.435388   48440 command_runner.go:130] > BuildTags:      
	I1004 03:53:09.435392   48440 command_runner.go:130] >   containers_image_ostree_stub
	I1004 03:53:09.435396   48440 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1004 03:53:09.435401   48440 command_runner.go:130] >   btrfs_noversion
	I1004 03:53:09.435407   48440 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1004 03:53:09.435412   48440 command_runner.go:130] >   libdm_no_deferred_remove
	I1004 03:53:09.435417   48440 command_runner.go:130] >   seccomp
	I1004 03:53:09.435423   48440 command_runner.go:130] > LDFlags:          unknown
	I1004 03:53:09.435428   48440 command_runner.go:130] > SeccompEnabled:   true
	I1004 03:53:09.435434   48440 command_runner.go:130] > AppArmorEnabled:  false
	I1004 03:53:09.437713   48440 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 03:53:09.438991   48440 main.go:141] libmachine: (multinode-355278) Calling .GetIP
	I1004 03:53:09.441506   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:53:09.441876   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:53:09.441903   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:53:09.442142   48440 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 03:53:09.446358   48440 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1004 03:53:09.446534   48440 kubeadm.go:883] updating cluster {Name:multinode-355278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-355278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 03:53:09.446689   48440 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:53:09.446747   48440 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:53:09.486600   48440 command_runner.go:130] > {
	I1004 03:53:09.486628   48440 command_runner.go:130] >   "images": [
	I1004 03:53:09.486635   48440 command_runner.go:130] >     {
	I1004 03:53:09.486649   48440 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1004 03:53:09.486658   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.486668   48440 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1004 03:53:09.486673   48440 command_runner.go:130] >       ],
	I1004 03:53:09.486679   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.486692   48440 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1004 03:53:09.486702   48440 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1004 03:53:09.486712   48440 command_runner.go:130] >       ],
	I1004 03:53:09.486717   48440 command_runner.go:130] >       "size": "87190579",
	I1004 03:53:09.486724   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.486729   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.486738   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.486749   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.486754   48440 command_runner.go:130] >     },
	I1004 03:53:09.486759   48440 command_runner.go:130] >     {
	I1004 03:53:09.486770   48440 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1004 03:53:09.486776   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.486785   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1004 03:53:09.486791   48440 command_runner.go:130] >       ],
	I1004 03:53:09.486798   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.486809   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1004 03:53:09.486823   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1004 03:53:09.486829   48440 command_runner.go:130] >       ],
	I1004 03:53:09.486836   48440 command_runner.go:130] >       "size": "1363676",
	I1004 03:53:09.486842   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.486854   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.486859   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.486864   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.486867   48440 command_runner.go:130] >     },
	I1004 03:53:09.486871   48440 command_runner.go:130] >     {
	I1004 03:53:09.486876   48440 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1004 03:53:09.486883   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.486888   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1004 03:53:09.486891   48440 command_runner.go:130] >       ],
	I1004 03:53:09.486895   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.486902   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1004 03:53:09.486910   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1004 03:53:09.486913   48440 command_runner.go:130] >       ],
	I1004 03:53:09.486917   48440 command_runner.go:130] >       "size": "31470524",
	I1004 03:53:09.486921   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.486925   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.486929   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.486933   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.486938   48440 command_runner.go:130] >     },
	I1004 03:53:09.486941   48440 command_runner.go:130] >     {
	I1004 03:53:09.486947   48440 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1004 03:53:09.486952   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.486956   48440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1004 03:53:09.486959   48440 command_runner.go:130] >       ],
	I1004 03:53:09.486963   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.486972   48440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1004 03:53:09.486982   48440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1004 03:53:09.486987   48440 command_runner.go:130] >       ],
	I1004 03:53:09.486991   48440 command_runner.go:130] >       "size": "63273227",
	I1004 03:53:09.486994   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.486998   48440 command_runner.go:130] >       "username": "nonroot",
	I1004 03:53:09.487002   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.487006   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.487011   48440 command_runner.go:130] >     },
	I1004 03:53:09.487016   48440 command_runner.go:130] >     {
	I1004 03:53:09.487022   48440 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1004 03:53:09.487025   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.487029   48440 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1004 03:53:09.487033   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487037   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.487043   48440 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1004 03:53:09.487051   48440 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1004 03:53:09.487065   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487069   48440 command_runner.go:130] >       "size": "149009664",
	I1004 03:53:09.487072   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.487076   48440 command_runner.go:130] >         "value": "0"
	I1004 03:53:09.487079   48440 command_runner.go:130] >       },
	I1004 03:53:09.487083   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.487086   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.487090   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.487094   48440 command_runner.go:130] >     },
	I1004 03:53:09.487097   48440 command_runner.go:130] >     {
	I1004 03:53:09.487103   48440 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1004 03:53:09.487107   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.487112   48440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1004 03:53:09.487116   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487119   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.487126   48440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1004 03:53:09.487134   48440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1004 03:53:09.487137   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487141   48440 command_runner.go:130] >       "size": "95237600",
	I1004 03:53:09.487147   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.487151   48440 command_runner.go:130] >         "value": "0"
	I1004 03:53:09.487154   48440 command_runner.go:130] >       },
	I1004 03:53:09.487158   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.487163   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.487170   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.487174   48440 command_runner.go:130] >     },
	I1004 03:53:09.487178   48440 command_runner.go:130] >     {
	I1004 03:53:09.487183   48440 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1004 03:53:09.487193   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.487200   48440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1004 03:53:09.487204   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487210   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.487218   48440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1004 03:53:09.487227   48440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1004 03:53:09.487231   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487235   48440 command_runner.go:130] >       "size": "89437508",
	I1004 03:53:09.487239   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.487242   48440 command_runner.go:130] >         "value": "0"
	I1004 03:53:09.487246   48440 command_runner.go:130] >       },
	I1004 03:53:09.487250   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.487255   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.487259   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.487262   48440 command_runner.go:130] >     },
	I1004 03:53:09.487266   48440 command_runner.go:130] >     {
	I1004 03:53:09.487272   48440 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1004 03:53:09.487277   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.487282   48440 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1004 03:53:09.487288   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487292   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.487306   48440 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1004 03:53:09.487316   48440 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1004 03:53:09.487321   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487327   48440 command_runner.go:130] >       "size": "92733849",
	I1004 03:53:09.487331   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.487335   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.487341   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.487345   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.487351   48440 command_runner.go:130] >     },
	I1004 03:53:09.487354   48440 command_runner.go:130] >     {
	I1004 03:53:09.487360   48440 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1004 03:53:09.487364   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.487369   48440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1004 03:53:09.487372   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487375   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.487393   48440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1004 03:53:09.487400   48440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1004 03:53:09.487403   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487407   48440 command_runner.go:130] >       "size": "68420934",
	I1004 03:53:09.487410   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.487414   48440 command_runner.go:130] >         "value": "0"
	I1004 03:53:09.487418   48440 command_runner.go:130] >       },
	I1004 03:53:09.487421   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.487425   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.487429   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.487432   48440 command_runner.go:130] >     },
	I1004 03:53:09.487435   48440 command_runner.go:130] >     {
	I1004 03:53:09.487442   48440 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1004 03:53:09.487446   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.487450   48440 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1004 03:53:09.487453   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487457   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.487463   48440 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1004 03:53:09.487469   48440 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1004 03:53:09.487472   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487475   48440 command_runner.go:130] >       "size": "742080",
	I1004 03:53:09.487479   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.487483   48440 command_runner.go:130] >         "value": "65535"
	I1004 03:53:09.487486   48440 command_runner.go:130] >       },
	I1004 03:53:09.487490   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.487493   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.487498   48440 command_runner.go:130] >       "pinned": true
	I1004 03:53:09.487504   48440 command_runner.go:130] >     }
	I1004 03:53:09.487507   48440 command_runner.go:130] >   ]
	I1004 03:53:09.487510   48440 command_runner.go:130] > }
	I1004 03:53:09.487675   48440 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:53:09.487689   48440 crio.go:433] Images already preloaded, skipping extraction
	I1004 03:53:09.487742   48440 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:53:09.520592   48440 command_runner.go:130] > {
	I1004 03:53:09.520620   48440 command_runner.go:130] >   "images": [
	I1004 03:53:09.520626   48440 command_runner.go:130] >     {
	I1004 03:53:09.520637   48440 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1004 03:53:09.520645   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.520658   48440 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1004 03:53:09.520663   48440 command_runner.go:130] >       ],
	I1004 03:53:09.520669   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.520681   48440 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1004 03:53:09.520692   48440 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1004 03:53:09.520702   48440 command_runner.go:130] >       ],
	I1004 03:53:09.520710   48440 command_runner.go:130] >       "size": "87190579",
	I1004 03:53:09.520716   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.520726   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.520749   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.520760   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.520766   48440 command_runner.go:130] >     },
	I1004 03:53:09.520772   48440 command_runner.go:130] >     {
	I1004 03:53:09.520781   48440 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1004 03:53:09.520790   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.520798   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1004 03:53:09.520804   48440 command_runner.go:130] >       ],
	I1004 03:53:09.520811   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.520819   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1004 03:53:09.520828   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1004 03:53:09.520832   48440 command_runner.go:130] >       ],
	I1004 03:53:09.520836   48440 command_runner.go:130] >       "size": "1363676",
	I1004 03:53:09.520842   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.520848   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.520851   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.520855   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.520859   48440 command_runner.go:130] >     },
	I1004 03:53:09.520862   48440 command_runner.go:130] >     {
	I1004 03:53:09.520867   48440 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1004 03:53:09.520874   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.520879   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1004 03:53:09.520885   48440 command_runner.go:130] >       ],
	I1004 03:53:09.520889   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.520897   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1004 03:53:09.520906   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1004 03:53:09.520909   48440 command_runner.go:130] >       ],
	I1004 03:53:09.520914   48440 command_runner.go:130] >       "size": "31470524",
	I1004 03:53:09.520917   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.520921   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.520925   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.520928   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.520932   48440 command_runner.go:130] >     },
	I1004 03:53:09.520935   48440 command_runner.go:130] >     {
	I1004 03:53:09.520941   48440 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1004 03:53:09.520947   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.520951   48440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1004 03:53:09.520955   48440 command_runner.go:130] >       ],
	I1004 03:53:09.520959   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.520972   48440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1004 03:53:09.520990   48440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1004 03:53:09.520999   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521007   48440 command_runner.go:130] >       "size": "63273227",
	I1004 03:53:09.521014   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.521019   48440 command_runner.go:130] >       "username": "nonroot",
	I1004 03:53:09.521028   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.521032   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.521035   48440 command_runner.go:130] >     },
	I1004 03:53:09.521039   48440 command_runner.go:130] >     {
	I1004 03:53:09.521045   48440 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1004 03:53:09.521051   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.521055   48440 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1004 03:53:09.521061   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521065   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.521071   48440 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1004 03:53:09.521079   48440 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1004 03:53:09.521083   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521088   48440 command_runner.go:130] >       "size": "149009664",
	I1004 03:53:09.521093   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.521097   48440 command_runner.go:130] >         "value": "0"
	I1004 03:53:09.521101   48440 command_runner.go:130] >       },
	I1004 03:53:09.521106   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.521110   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.521116   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.521120   48440 command_runner.go:130] >     },
	I1004 03:53:09.521128   48440 command_runner.go:130] >     {
	I1004 03:53:09.521137   48440 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1004 03:53:09.521141   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.521146   48440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1004 03:53:09.521150   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521154   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.521163   48440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1004 03:53:09.521170   48440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1004 03:53:09.521176   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521180   48440 command_runner.go:130] >       "size": "95237600",
	I1004 03:53:09.521184   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.521187   48440 command_runner.go:130] >         "value": "0"
	I1004 03:53:09.521191   48440 command_runner.go:130] >       },
	I1004 03:53:09.521195   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.521200   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.521209   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.521214   48440 command_runner.go:130] >     },
	I1004 03:53:09.521218   48440 command_runner.go:130] >     {
	I1004 03:53:09.521224   48440 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1004 03:53:09.521231   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.521236   48440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1004 03:53:09.521242   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521246   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.521253   48440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1004 03:53:09.521263   48440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1004 03:53:09.521270   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521279   48440 command_runner.go:130] >       "size": "89437508",
	I1004 03:53:09.521284   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.521291   48440 command_runner.go:130] >         "value": "0"
	I1004 03:53:09.521299   48440 command_runner.go:130] >       },
	I1004 03:53:09.521304   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.521312   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.521318   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.521326   48440 command_runner.go:130] >     },
	I1004 03:53:09.521331   48440 command_runner.go:130] >     {
	I1004 03:53:09.521341   48440 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1004 03:53:09.521350   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.521357   48440 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1004 03:53:09.521365   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521371   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.521394   48440 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1004 03:53:09.521410   48440 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1004 03:53:09.521417   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521424   48440 command_runner.go:130] >       "size": "92733849",
	I1004 03:53:09.521432   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.521436   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.521443   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.521449   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.521457   48440 command_runner.go:130] >     },
	I1004 03:53:09.521464   48440 command_runner.go:130] >     {
	I1004 03:53:09.521477   48440 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1004 03:53:09.521486   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.521494   48440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1004 03:53:09.521502   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521509   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.521524   48440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1004 03:53:09.521541   48440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1004 03:53:09.521548   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521553   48440 command_runner.go:130] >       "size": "68420934",
	I1004 03:53:09.521559   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.521563   48440 command_runner.go:130] >         "value": "0"
	I1004 03:53:09.521568   48440 command_runner.go:130] >       },
	I1004 03:53:09.521575   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.521580   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.521590   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.521595   48440 command_runner.go:130] >     },
	I1004 03:53:09.521603   48440 command_runner.go:130] >     {
	I1004 03:53:09.521612   48440 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1004 03:53:09.521621   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.521627   48440 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1004 03:53:09.521633   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521642   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.521652   48440 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1004 03:53:09.521673   48440 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1004 03:53:09.521682   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521689   48440 command_runner.go:130] >       "size": "742080",
	I1004 03:53:09.521697   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.521704   48440 command_runner.go:130] >         "value": "65535"
	I1004 03:53:09.521709   48440 command_runner.go:130] >       },
	I1004 03:53:09.521718   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.521724   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.521734   48440 command_runner.go:130] >       "pinned": true
	I1004 03:53:09.521742   48440 command_runner.go:130] >     }
	I1004 03:53:09.521748   48440 command_runner.go:130] >   ]
	I1004 03:53:09.521755   48440 command_runner.go:130] > }
	I1004 03:53:09.521877   48440 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:53:09.521887   48440 cache_images.go:84] Images are preloaded, skipping loading
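
Editor's note: the JSON dump above is the image list returned over the CRI-O socket (the same shape `crictl images -o json` produces: id, repoTags, repoDigests, size, pinned), and the two log lines that follow it show minikube concluding that every required image is already preloaded. The Go sketch below is a minimal, hypothetical illustration of that kind of check — the ImageList/Image struct names, the top-level "images" key, and the `required` set are assumptions for illustration, not minikube's actual code.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Image mirrors the per-image fields visible in the log above.
	type Image struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	}

	// ImageList assumes the top-level object is {"images": [...]}.
	type ImageList struct {
		Images []Image `json:"images"`
	}

	func main() {
		// Ask crictl for the image list in JSON form (assumes crictl is
		// installed and talking to the CRI-O socket on this node).
		out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
		if err != nil {
			panic(err)
		}

		var list ImageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}

		// Hypothetical subset of images a cluster bootstrap would need.
		required := map[string]bool{
			"registry.k8s.io/kube-apiserver:v1.31.1": false,
			"registry.k8s.io/etcd:3.5.15-0":          false,
			"registry.k8s.io/pause:3.10":             false,
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if _, ok := required[tag]; ok {
					required[tag] = true
				}
			}
		}
		for tag, found := range required {
			fmt.Printf("%s preloaded: %v\n", tag, found)
		}
	}
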
	I1004 03:53:09.521894   48440 kubeadm.go:934] updating node { 192.168.39.50 8443 v1.31.1 crio true true} ...
	I1004 03:53:09.521981   48440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-355278 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-355278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
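
Editor's note: the kubelet snippet above is a systemd drop-in with node-specific values (Kubernetes version, hostname override, node IP) substituted into the ExecStart line from the cluster config printed after it. The sketch below shows one way such a drop-in could be rendered with Go's text/template; the KubeletOpts type and field names are hypothetical and not minikube's real types.

	package main

	import (
		"os"
		"text/template"
	)

	// KubeletOpts is a hypothetical subset of the values substituted into
	// the kubelet unit shown in the log above.
	type KubeletOpts struct {
		KubernetesVersion string
		Hostname          string
		NodeIP            string
	}

	// unitTmpl reproduces the shape of the drop-in from the log: clear
	// ExecStart, then set it again with node-specific flags.
	const unitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(unitTmpl))
		opts := KubeletOpts{
			KubernetesVersion: "v1.31.1",
			Hostname:          "multinode-355278",
			NodeIP:            "192.168.39.50",
		}
		// Render to stdout; a real flow would write the result to a file
		// under /etc/systemd/system/kubelet.service.d/ and daemon-reload.
		if err := tmpl.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}
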
	I1004 03:53:09.522042   48440 ssh_runner.go:195] Run: crio config
	I1004 03:53:09.564316   48440 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1004 03:53:09.564348   48440 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1004 03:53:09.564359   48440 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1004 03:53:09.564363   48440 command_runner.go:130] > #
	I1004 03:53:09.564374   48440 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1004 03:53:09.564383   48440 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1004 03:53:09.564394   48440 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1004 03:53:09.564405   48440 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1004 03:53:09.564412   48440 command_runner.go:130] > # reload'.
	I1004 03:53:09.564422   48440 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1004 03:53:09.564434   48440 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1004 03:53:09.564446   48440 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1004 03:53:09.564457   48440 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1004 03:53:09.564463   48440 command_runner.go:130] > [crio]
	I1004 03:53:09.564473   48440 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1004 03:53:09.564484   48440 command_runner.go:130] > # containers images, in this directory.
	I1004 03:53:09.564491   48440 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1004 03:53:09.564519   48440 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1004 03:53:09.564531   48440 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1004 03:53:09.564543   48440 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1004 03:53:09.564866   48440 command_runner.go:130] > # imagestore = ""
	I1004 03:53:09.564890   48440 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1004 03:53:09.564900   48440 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1004 03:53:09.564908   48440 command_runner.go:130] > storage_driver = "overlay"
	I1004 03:53:09.564917   48440 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1004 03:53:09.564935   48440 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1004 03:53:09.564940   48440 command_runner.go:130] > storage_option = [
	I1004 03:53:09.564949   48440 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1004 03:53:09.565030   48440 command_runner.go:130] > ]
	I1004 03:53:09.565053   48440 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1004 03:53:09.565063   48440 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1004 03:53:09.565074   48440 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1004 03:53:09.565084   48440 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1004 03:53:09.565093   48440 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1004 03:53:09.565100   48440 command_runner.go:130] > # always happen on a node reboot
	I1004 03:53:09.565111   48440 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1004 03:53:09.565131   48440 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1004 03:53:09.565143   48440 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1004 03:53:09.565153   48440 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1004 03:53:09.565164   48440 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1004 03:53:09.565178   48440 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1004 03:53:09.565193   48440 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1004 03:53:09.565205   48440 command_runner.go:130] > # internal_wipe = true
	I1004 03:53:09.565222   48440 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1004 03:53:09.565233   48440 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1004 03:53:09.565243   48440 command_runner.go:130] > # internal_repair = false
	I1004 03:53:09.565252   48440 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1004 03:53:09.565264   48440 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1004 03:53:09.565273   48440 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1004 03:53:09.565284   48440 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1004 03:53:09.565296   48440 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1004 03:53:09.565305   48440 command_runner.go:130] > [crio.api]
	I1004 03:53:09.565313   48440 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1004 03:53:09.565324   48440 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1004 03:53:09.565342   48440 command_runner.go:130] > # IP address on which the stream server will listen.
	I1004 03:53:09.565352   48440 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1004 03:53:09.565360   48440 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1004 03:53:09.565366   48440 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1004 03:53:09.565370   48440 command_runner.go:130] > # stream_port = "0"
	I1004 03:53:09.565375   48440 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1004 03:53:09.565381   48440 command_runner.go:130] > # stream_enable_tls = false
	I1004 03:53:09.565387   48440 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1004 03:53:09.565395   48440 command_runner.go:130] > # stream_idle_timeout = ""
	I1004 03:53:09.565405   48440 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1004 03:53:09.565419   48440 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1004 03:53:09.565428   48440 command_runner.go:130] > # minutes.
	I1004 03:53:09.565435   48440 command_runner.go:130] > # stream_tls_cert = ""
	I1004 03:53:09.565446   48440 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1004 03:53:09.565458   48440 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1004 03:53:09.565471   48440 command_runner.go:130] > # stream_tls_key = ""
	I1004 03:53:09.565486   48440 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1004 03:53:09.565496   48440 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1004 03:53:09.565511   48440 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1004 03:53:09.565520   48440 command_runner.go:130] > # stream_tls_ca = ""
	I1004 03:53:09.565534   48440 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1004 03:53:09.565545   48440 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1004 03:53:09.565558   48440 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1004 03:53:09.565568   48440 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1004 03:53:09.565578   48440 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1004 03:53:09.565589   48440 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1004 03:53:09.565598   48440 command_runner.go:130] > [crio.runtime]
	I1004 03:53:09.565609   48440 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1004 03:53:09.565620   48440 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1004 03:53:09.565628   48440 command_runner.go:130] > # "nofile=1024:2048"
	I1004 03:53:09.565639   48440 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1004 03:53:09.565648   48440 command_runner.go:130] > # default_ulimits = [
	I1004 03:53:09.565653   48440 command_runner.go:130] > # ]
	I1004 03:53:09.565662   48440 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1004 03:53:09.565669   48440 command_runner.go:130] > # no_pivot = false
	I1004 03:53:09.565681   48440 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1004 03:53:09.565693   48440 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1004 03:53:09.565706   48440 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1004 03:53:09.565722   48440 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1004 03:53:09.565730   48440 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1004 03:53:09.565736   48440 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1004 03:53:09.565743   48440 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1004 03:53:09.565747   48440 command_runner.go:130] > # Cgroup setting for conmon
	I1004 03:53:09.565756   48440 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1004 03:53:09.565759   48440 command_runner.go:130] > conmon_cgroup = "pod"
	I1004 03:53:09.565767   48440 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1004 03:53:09.565772   48440 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1004 03:53:09.565780   48440 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1004 03:53:09.565783   48440 command_runner.go:130] > conmon_env = [
	I1004 03:53:09.565794   48440 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1004 03:53:09.565799   48440 command_runner.go:130] > ]
	I1004 03:53:09.565807   48440 command_runner.go:130] > # Additional environment variables to set for all the
	I1004 03:53:09.565815   48440 command_runner.go:130] > # containers. These are overridden if set in the
	I1004 03:53:09.565827   48440 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1004 03:53:09.565836   48440 command_runner.go:130] > # default_env = [
	I1004 03:53:09.565844   48440 command_runner.go:130] > # ]
	I1004 03:53:09.565856   48440 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1004 03:53:09.565868   48440 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1004 03:53:09.565877   48440 command_runner.go:130] > # selinux = false
	I1004 03:53:09.565887   48440 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1004 03:53:09.565900   48440 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1004 03:53:09.565912   48440 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1004 03:53:09.565920   48440 command_runner.go:130] > # seccomp_profile = ""
	I1004 03:53:09.565929   48440 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1004 03:53:09.565945   48440 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1004 03:53:09.565959   48440 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1004 03:53:09.565969   48440 command_runner.go:130] > # which might increase security.
	I1004 03:53:09.565978   48440 command_runner.go:130] > # This option is currently deprecated,
	I1004 03:53:09.565989   48440 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1004 03:53:09.565998   48440 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1004 03:53:09.566011   48440 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1004 03:53:09.566022   48440 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1004 03:53:09.566033   48440 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1004 03:53:09.566044   48440 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1004 03:53:09.566054   48440 command_runner.go:130] > # This option supports live configuration reload.
	I1004 03:53:09.566065   48440 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1004 03:53:09.566077   48440 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1004 03:53:09.566088   48440 command_runner.go:130] > # the cgroup blockio controller.
	I1004 03:53:09.566095   48440 command_runner.go:130] > # blockio_config_file = ""
	I1004 03:53:09.566106   48440 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1004 03:53:09.566115   48440 command_runner.go:130] > # blockio parameters.
	I1004 03:53:09.566121   48440 command_runner.go:130] > # blockio_reload = false
	I1004 03:53:09.566131   48440 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1004 03:53:09.566141   48440 command_runner.go:130] > # irqbalance daemon.
	I1004 03:53:09.566149   48440 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1004 03:53:09.566162   48440 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1004 03:53:09.566175   48440 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1004 03:53:09.566188   48440 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1004 03:53:09.566202   48440 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1004 03:53:09.566215   48440 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1004 03:53:09.566226   48440 command_runner.go:130] > # This option supports live configuration reload.
	I1004 03:53:09.566233   48440 command_runner.go:130] > # rdt_config_file = ""
	I1004 03:53:09.566244   48440 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1004 03:53:09.566253   48440 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1004 03:53:09.566272   48440 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1004 03:53:09.566282   48440 command_runner.go:130] > # separate_pull_cgroup = ""
	I1004 03:53:09.566292   48440 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1004 03:53:09.566304   48440 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1004 03:53:09.566310   48440 command_runner.go:130] > # will be added.
	I1004 03:53:09.566319   48440 command_runner.go:130] > # default_capabilities = [
	I1004 03:53:09.566325   48440 command_runner.go:130] > # 	"CHOWN",
	I1004 03:53:09.566334   48440 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1004 03:53:09.566340   48440 command_runner.go:130] > # 	"FSETID",
	I1004 03:53:09.566349   48440 command_runner.go:130] > # 	"FOWNER",
	I1004 03:53:09.566357   48440 command_runner.go:130] > # 	"SETGID",
	I1004 03:53:09.566365   48440 command_runner.go:130] > # 	"SETUID",
	I1004 03:53:09.566371   48440 command_runner.go:130] > # 	"SETPCAP",
	I1004 03:53:09.566380   48440 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1004 03:53:09.566386   48440 command_runner.go:130] > # 	"KILL",
	I1004 03:53:09.566394   48440 command_runner.go:130] > # ]
	I1004 03:53:09.566405   48440 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1004 03:53:09.566419   48440 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1004 03:53:09.566429   48440 command_runner.go:130] > # add_inheritable_capabilities = false
	I1004 03:53:09.566438   48440 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1004 03:53:09.566449   48440 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1004 03:53:09.566455   48440 command_runner.go:130] > default_sysctls = [
	I1004 03:53:09.566481   48440 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1004 03:53:09.566491   48440 command_runner.go:130] > ]
	I1004 03:53:09.566499   48440 command_runner.go:130] > # List of devices on the host that a
	I1004 03:53:09.566511   48440 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1004 03:53:09.566519   48440 command_runner.go:130] > # allowed_devices = [
	I1004 03:53:09.566530   48440 command_runner.go:130] > # 	"/dev/fuse",
	I1004 03:53:09.566535   48440 command_runner.go:130] > # ]
	I1004 03:53:09.566546   48440 command_runner.go:130] > # List of additional devices. specified as
	I1004 03:53:09.566557   48440 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1004 03:53:09.566568   48440 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1004 03:53:09.566578   48440 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1004 03:53:09.566587   48440 command_runner.go:130] > # additional_devices = [
	I1004 03:53:09.566592   48440 command_runner.go:130] > # ]
	I1004 03:53:09.566602   48440 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1004 03:53:09.566612   48440 command_runner.go:130] > # cdi_spec_dirs = [
	I1004 03:53:09.566618   48440 command_runner.go:130] > # 	"/etc/cdi",
	I1004 03:53:09.566627   48440 command_runner.go:130] > # 	"/var/run/cdi",
	I1004 03:53:09.566633   48440 command_runner.go:130] > # ]
	I1004 03:53:09.566645   48440 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1004 03:53:09.566657   48440 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1004 03:53:09.566667   48440 command_runner.go:130] > # Defaults to false.
	I1004 03:53:09.566676   48440 command_runner.go:130] > # device_ownership_from_security_context = false
	I1004 03:53:09.566688   48440 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1004 03:53:09.566697   48440 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1004 03:53:09.566706   48440 command_runner.go:130] > # hooks_dir = [
	I1004 03:53:09.566713   48440 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1004 03:53:09.566719   48440 command_runner.go:130] > # ]
	I1004 03:53:09.566725   48440 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1004 03:53:09.566732   48440 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1004 03:53:09.566737   48440 command_runner.go:130] > # its default mounts from the following two files:
	I1004 03:53:09.566739   48440 command_runner.go:130] > #
	I1004 03:53:09.566745   48440 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1004 03:53:09.566754   48440 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1004 03:53:09.566759   48440 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1004 03:53:09.566765   48440 command_runner.go:130] > #
	I1004 03:53:09.566774   48440 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1004 03:53:09.566787   48440 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1004 03:53:09.566797   48440 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1004 03:53:09.566809   48440 command_runner.go:130] > #      only add mounts it finds in this file.
	I1004 03:53:09.566814   48440 command_runner.go:130] > #
	I1004 03:53:09.566822   48440 command_runner.go:130] > # default_mounts_file = ""
	I1004 03:53:09.566833   48440 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1004 03:53:09.566850   48440 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1004 03:53:09.566858   48440 command_runner.go:130] > pids_limit = 1024
	I1004 03:53:09.566868   48440 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1004 03:53:09.566880   48440 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1004 03:53:09.566890   48440 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1004 03:53:09.566904   48440 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1004 03:53:09.566913   48440 command_runner.go:130] > # log_size_max = -1
	I1004 03:53:09.566925   48440 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1004 03:53:09.566934   48440 command_runner.go:130] > # log_to_journald = false
	I1004 03:53:09.566943   48440 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1004 03:53:09.566954   48440 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1004 03:53:09.566965   48440 command_runner.go:130] > # Path to directory for container attach sockets.
	I1004 03:53:09.566977   48440 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1004 03:53:09.566985   48440 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1004 03:53:09.566994   48440 command_runner.go:130] > # bind_mount_prefix = ""
	I1004 03:53:09.567003   48440 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1004 03:53:09.567012   48440 command_runner.go:130] > # read_only = false
	I1004 03:53:09.567022   48440 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1004 03:53:09.567035   48440 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1004 03:53:09.567046   48440 command_runner.go:130] > # live configuration reload.
	I1004 03:53:09.567070   48440 command_runner.go:130] > # log_level = "info"
	I1004 03:53:09.567081   48440 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1004 03:53:09.567089   48440 command_runner.go:130] > # This option supports live configuration reload.
	I1004 03:53:09.567098   48440 command_runner.go:130] > # log_filter = ""
	I1004 03:53:09.567109   48440 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1004 03:53:09.567121   48440 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1004 03:53:09.567128   48440 command_runner.go:130] > # separated by comma.
	I1004 03:53:09.567140   48440 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1004 03:53:09.567149   48440 command_runner.go:130] > # uid_mappings = ""
	I1004 03:53:09.567160   48440 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1004 03:53:09.567174   48440 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1004 03:53:09.567183   48440 command_runner.go:130] > # separated by comma.
	I1004 03:53:09.567199   48440 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1004 03:53:09.567208   48440 command_runner.go:130] > # gid_mappings = ""
	I1004 03:53:09.567218   48440 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1004 03:53:09.567230   48440 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1004 03:53:09.567247   48440 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1004 03:53:09.567262   48440 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1004 03:53:09.567274   48440 command_runner.go:130] > # minimum_mappable_uid = -1
	I1004 03:53:09.567286   48440 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1004 03:53:09.567298   48440 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1004 03:53:09.567308   48440 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1004 03:53:09.567315   48440 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1004 03:53:09.567321   48440 command_runner.go:130] > # minimum_mappable_gid = -1
	I1004 03:53:09.567327   48440 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1004 03:53:09.567342   48440 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1004 03:53:09.567353   48440 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1004 03:53:09.567361   48440 command_runner.go:130] > # ctr_stop_timeout = 30
	I1004 03:53:09.567370   48440 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1004 03:53:09.567383   48440 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1004 03:53:09.567394   48440 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1004 03:53:09.567401   48440 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1004 03:53:09.567411   48440 command_runner.go:130] > drop_infra_ctr = false
	I1004 03:53:09.567420   48440 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1004 03:53:09.567432   48440 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1004 03:53:09.567446   48440 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1004 03:53:09.567456   48440 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1004 03:53:09.567475   48440 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1004 03:53:09.567487   48440 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1004 03:53:09.567497   48440 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1004 03:53:09.567506   48440 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1004 03:53:09.567513   48440 command_runner.go:130] > # shared_cpuset = ""
	I1004 03:53:09.567527   48440 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1004 03:53:09.567538   48440 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1004 03:53:09.567547   48440 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1004 03:53:09.567561   48440 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1004 03:53:09.567570   48440 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1004 03:53:09.567579   48440 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1004 03:53:09.567599   48440 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1004 03:53:09.567610   48440 command_runner.go:130] > # enable_criu_support = false
	I1004 03:53:09.567619   48440 command_runner.go:130] > # Enable/disable the generation of the container,
	I1004 03:53:09.567635   48440 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1004 03:53:09.567645   48440 command_runner.go:130] > # enable_pod_events = false
	I1004 03:53:09.567655   48440 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1004 03:53:09.567669   48440 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1004 03:53:09.567679   48440 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1004 03:53:09.567686   48440 command_runner.go:130] > # default_runtime = "runc"
	I1004 03:53:09.567697   48440 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1004 03:53:09.567711   48440 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1004 03:53:09.567726   48440 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1004 03:53:09.567734   48440 command_runner.go:130] > # creation as a file is not desired either.
	I1004 03:53:09.567741   48440 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1004 03:53:09.567748   48440 command_runner.go:130] > # the hostname is being managed dynamically.
	I1004 03:53:09.567753   48440 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1004 03:53:09.567756   48440 command_runner.go:130] > # ]
	I1004 03:53:09.567762   48440 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1004 03:53:09.567770   48440 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1004 03:53:09.567776   48440 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1004 03:53:09.567815   48440 command_runner.go:130] > # Each entry in the table should follow the format:
	I1004 03:53:09.567820   48440 command_runner.go:130] > #
	I1004 03:53:09.567831   48440 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1004 03:53:09.567841   48440 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1004 03:53:09.567883   48440 command_runner.go:130] > # runtime_type = "oci"
	I1004 03:53:09.567890   48440 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1004 03:53:09.567895   48440 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1004 03:53:09.567906   48440 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1004 03:53:09.567914   48440 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1004 03:53:09.567917   48440 command_runner.go:130] > # monitor_env = []
	I1004 03:53:09.567922   48440 command_runner.go:130] > # privileged_without_host_devices = false
	I1004 03:53:09.567929   48440 command_runner.go:130] > # allowed_annotations = []
	I1004 03:53:09.567934   48440 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1004 03:53:09.567939   48440 command_runner.go:130] > # Where:
	I1004 03:53:09.567945   48440 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1004 03:53:09.567953   48440 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1004 03:53:09.567964   48440 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1004 03:53:09.567975   48440 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1004 03:53:09.567984   48440 command_runner.go:130] > #   in $PATH.
	I1004 03:53:09.567994   48440 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1004 03:53:09.568005   48440 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1004 03:53:09.568018   48440 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1004 03:53:09.568027   48440 command_runner.go:130] > #   state.
	I1004 03:53:09.568037   48440 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1004 03:53:09.568049   48440 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1004 03:53:09.568059   48440 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1004 03:53:09.568067   48440 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1004 03:53:09.568073   48440 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1004 03:53:09.568081   48440 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1004 03:53:09.568086   48440 command_runner.go:130] > #   The currently recognized values are:
	I1004 03:53:09.568093   48440 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1004 03:53:09.568099   48440 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1004 03:53:09.568107   48440 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1004 03:53:09.568115   48440 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1004 03:53:09.568122   48440 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1004 03:53:09.568130   48440 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1004 03:53:09.568137   48440 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1004 03:53:09.568144   48440 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1004 03:53:09.568150   48440 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1004 03:53:09.568158   48440 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1004 03:53:09.568171   48440 command_runner.go:130] > #   deprecated option "conmon".
	I1004 03:53:09.568180   48440 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1004 03:53:09.568185   48440 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1004 03:53:09.568193   48440 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1004 03:53:09.568200   48440 command_runner.go:130] > #   should be moved to the container's cgroup
	I1004 03:53:09.568206   48440 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1004 03:53:09.568213   48440 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1004 03:53:09.568221   48440 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1004 03:53:09.568228   48440 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1004 03:53:09.568231   48440 command_runner.go:130] > #
	I1004 03:53:09.568238   48440 command_runner.go:130] > # Using the seccomp notifier feature:
	I1004 03:53:09.568241   48440 command_runner.go:130] > #
	I1004 03:53:09.568247   48440 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1004 03:53:09.568256   48440 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1004 03:53:09.568259   48440 command_runner.go:130] > #
	I1004 03:53:09.568267   48440 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1004 03:53:09.568275   48440 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1004 03:53:09.568279   48440 command_runner.go:130] > #
	I1004 03:53:09.568284   48440 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1004 03:53:09.568290   48440 command_runner.go:130] > # feature.
	I1004 03:53:09.568293   48440 command_runner.go:130] > #
	I1004 03:53:09.568299   48440 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1004 03:53:09.568307   48440 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1004 03:53:09.568313   48440 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1004 03:53:09.568321   48440 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1004 03:53:09.568327   48440 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1004 03:53:09.568332   48440 command_runner.go:130] > #
	I1004 03:53:09.568337   48440 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1004 03:53:09.568345   48440 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1004 03:53:09.568348   48440 command_runner.go:130] > #
	I1004 03:53:09.568354   48440 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1004 03:53:09.568360   48440 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1004 03:53:09.568363   48440 command_runner.go:130] > #
	I1004 03:53:09.568377   48440 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1004 03:53:09.568386   48440 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1004 03:53:09.568390   48440 command_runner.go:130] > # limitation.
	I1004 03:53:09.568394   48440 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1004 03:53:09.568398   48440 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1004 03:53:09.568401   48440 command_runner.go:130] > runtime_type = "oci"
	I1004 03:53:09.568406   48440 command_runner.go:130] > runtime_root = "/run/runc"
	I1004 03:53:09.568410   48440 command_runner.go:130] > runtime_config_path = ""
	I1004 03:53:09.568415   48440 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1004 03:53:09.568420   48440 command_runner.go:130] > monitor_cgroup = "pod"
	I1004 03:53:09.568425   48440 command_runner.go:130] > monitor_exec_cgroup = ""
	I1004 03:53:09.568430   48440 command_runner.go:130] > monitor_env = [
	I1004 03:53:09.568435   48440 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1004 03:53:09.568440   48440 command_runner.go:130] > ]
	I1004 03:53:09.568444   48440 command_runner.go:130] > privileged_without_host_devices = false
	I1004 03:53:09.568454   48440 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1004 03:53:09.568459   48440 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1004 03:53:09.568471   48440 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1004 03:53:09.568481   48440 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1004 03:53:09.568488   48440 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1004 03:53:09.568496   48440 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1004 03:53:09.568507   48440 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1004 03:53:09.568515   48440 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1004 03:53:09.568520   48440 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1004 03:53:09.568527   48440 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1004 03:53:09.568530   48440 command_runner.go:130] > # Example:
	I1004 03:53:09.568534   48440 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1004 03:53:09.568539   48440 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1004 03:53:09.568543   48440 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1004 03:53:09.568547   48440 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1004 03:53:09.568551   48440 command_runner.go:130] > # cpuset = 0
	I1004 03:53:09.568555   48440 command_runner.go:130] > # cpushares = "0-1"
	I1004 03:53:09.568557   48440 command_runner.go:130] > # Where:
	I1004 03:53:09.568563   48440 command_runner.go:130] > # The workload name is workload-type.
	I1004 03:53:09.568569   48440 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1004 03:53:09.568574   48440 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1004 03:53:09.568579   48440 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1004 03:53:09.568586   48440 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1004 03:53:09.568591   48440 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1004 03:53:09.568595   48440 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1004 03:53:09.568601   48440 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1004 03:53:09.568608   48440 command_runner.go:130] > # Default value is set to true
	I1004 03:53:09.568612   48440 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1004 03:53:09.568617   48440 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1004 03:53:09.568622   48440 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1004 03:53:09.568627   48440 command_runner.go:130] > # Default value is set to 'false'
	I1004 03:53:09.568633   48440 command_runner.go:130] > # disable_hostport_mapping = false
	I1004 03:53:09.568639   48440 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1004 03:53:09.568643   48440 command_runner.go:130] > #
	I1004 03:53:09.568649   48440 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1004 03:53:09.568658   48440 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1004 03:53:09.568664   48440 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1004 03:53:09.568670   48440 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1004 03:53:09.568675   48440 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1004 03:53:09.568678   48440 command_runner.go:130] > [crio.image]
	I1004 03:53:09.568684   48440 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1004 03:53:09.568688   48440 command_runner.go:130] > # default_transport = "docker://"
	I1004 03:53:09.568696   48440 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1004 03:53:09.568701   48440 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1004 03:53:09.568705   48440 command_runner.go:130] > # global_auth_file = ""
	I1004 03:53:09.568710   48440 command_runner.go:130] > # The image used to instantiate infra containers.
	I1004 03:53:09.568714   48440 command_runner.go:130] > # This option supports live configuration reload.
	I1004 03:53:09.568718   48440 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1004 03:53:09.568724   48440 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1004 03:53:09.568730   48440 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1004 03:53:09.568734   48440 command_runner.go:130] > # This option supports live configuration reload.
	I1004 03:53:09.568744   48440 command_runner.go:130] > # pause_image_auth_file = ""
	I1004 03:53:09.568749   48440 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1004 03:53:09.568754   48440 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1004 03:53:09.568760   48440 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1004 03:53:09.568765   48440 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1004 03:53:09.568768   48440 command_runner.go:130] > # pause_command = "/pause"
	I1004 03:53:09.568774   48440 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1004 03:53:09.568779   48440 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1004 03:53:09.568783   48440 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1004 03:53:09.568789   48440 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1004 03:53:09.568794   48440 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1004 03:53:09.568799   48440 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1004 03:53:09.568802   48440 command_runner.go:130] > # pinned_images = [
	I1004 03:53:09.568806   48440 command_runner.go:130] > # ]
	I1004 03:53:09.568811   48440 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1004 03:53:09.568817   48440 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1004 03:53:09.568824   48440 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1004 03:53:09.568830   48440 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1004 03:53:09.568835   48440 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1004 03:53:09.568838   48440 command_runner.go:130] > # signature_policy = ""
	I1004 03:53:09.568843   48440 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1004 03:53:09.568852   48440 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1004 03:53:09.568858   48440 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1004 03:53:09.568864   48440 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1004 03:53:09.568869   48440 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1004 03:53:09.568877   48440 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1004 03:53:09.568882   48440 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1004 03:53:09.568892   48440 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1004 03:53:09.568896   48440 command_runner.go:130] > # changing them here.
	I1004 03:53:09.568900   48440 command_runner.go:130] > # insecure_registries = [
	I1004 03:53:09.568903   48440 command_runner.go:130] > # ]
	I1004 03:53:09.568909   48440 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1004 03:53:09.568916   48440 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1004 03:53:09.568926   48440 command_runner.go:130] > # image_volumes = "mkdir"
	I1004 03:53:09.568933   48440 command_runner.go:130] > # Temporary directory to use for storing big files
	I1004 03:53:09.568937   48440 command_runner.go:130] > # big_files_temporary_dir = ""
	I1004 03:53:09.568942   48440 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1004 03:53:09.568948   48440 command_runner.go:130] > # CNI plugins.
	I1004 03:53:09.568952   48440 command_runner.go:130] > [crio.network]
	I1004 03:53:09.568957   48440 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1004 03:53:09.568964   48440 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1004 03:53:09.568968   48440 command_runner.go:130] > # cni_default_network = ""
	I1004 03:53:09.568975   48440 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1004 03:53:09.568979   48440 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1004 03:53:09.568987   48440 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1004 03:53:09.568991   48440 command_runner.go:130] > # plugin_dirs = [
	I1004 03:53:09.568996   48440 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1004 03:53:09.568999   48440 command_runner.go:130] > # ]
	I1004 03:53:09.569005   48440 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1004 03:53:09.569011   48440 command_runner.go:130] > [crio.metrics]
	I1004 03:53:09.569015   48440 command_runner.go:130] > # Globally enable or disable metrics support.
	I1004 03:53:09.569019   48440 command_runner.go:130] > enable_metrics = true
	I1004 03:53:09.569023   48440 command_runner.go:130] > # Specify enabled metrics collectors.
	I1004 03:53:09.569030   48440 command_runner.go:130] > # Per default all metrics are enabled.
	I1004 03:53:09.569036   48440 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1004 03:53:09.569042   48440 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1004 03:53:09.569049   48440 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1004 03:53:09.569053   48440 command_runner.go:130] > # metrics_collectors = [
	I1004 03:53:09.569059   48440 command_runner.go:130] > # 	"operations",
	I1004 03:53:09.569063   48440 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1004 03:53:09.569067   48440 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1004 03:53:09.569071   48440 command_runner.go:130] > # 	"operations_errors",
	I1004 03:53:09.569075   48440 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1004 03:53:09.569083   48440 command_runner.go:130] > # 	"image_pulls_by_name",
	I1004 03:53:09.569090   48440 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1004 03:53:09.569094   48440 command_runner.go:130] > # 	"image_pulls_failures",
	I1004 03:53:09.569104   48440 command_runner.go:130] > # 	"image_pulls_successes",
	I1004 03:53:09.569109   48440 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1004 03:53:09.569112   48440 command_runner.go:130] > # 	"image_layer_reuse",
	I1004 03:53:09.569117   48440 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1004 03:53:09.569124   48440 command_runner.go:130] > # 	"containers_oom_total",
	I1004 03:53:09.569129   48440 command_runner.go:130] > # 	"containers_oom",
	I1004 03:53:09.569133   48440 command_runner.go:130] > # 	"processes_defunct",
	I1004 03:53:09.569137   48440 command_runner.go:130] > # 	"operations_total",
	I1004 03:53:09.569140   48440 command_runner.go:130] > # 	"operations_latency_seconds",
	I1004 03:53:09.569145   48440 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1004 03:53:09.569148   48440 command_runner.go:130] > # 	"operations_errors_total",
	I1004 03:53:09.569152   48440 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1004 03:53:09.569157   48440 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1004 03:53:09.569161   48440 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1004 03:53:09.569165   48440 command_runner.go:130] > # 	"image_pulls_success_total",
	I1004 03:53:09.569169   48440 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1004 03:53:09.569173   48440 command_runner.go:130] > # 	"containers_oom_count_total",
	I1004 03:53:09.569180   48440 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1004 03:53:09.569184   48440 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1004 03:53:09.569189   48440 command_runner.go:130] > # ]
	I1004 03:53:09.569194   48440 command_runner.go:130] > # The port on which the metrics server will listen.
	I1004 03:53:09.569200   48440 command_runner.go:130] > # metrics_port = 9090
	I1004 03:53:09.569205   48440 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1004 03:53:09.569211   48440 command_runner.go:130] > # metrics_socket = ""
	I1004 03:53:09.569218   48440 command_runner.go:130] > # The certificate for the secure metrics server.
	I1004 03:53:09.569226   48440 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1004 03:53:09.569232   48440 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1004 03:53:09.569237   48440 command_runner.go:130] > # certificate on any modification event.
	I1004 03:53:09.569240   48440 command_runner.go:130] > # metrics_cert = ""
	I1004 03:53:09.569245   48440 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1004 03:53:09.569252   48440 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1004 03:53:09.569255   48440 command_runner.go:130] > # metrics_key = ""
	I1004 03:53:09.569261   48440 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1004 03:53:09.569267   48440 command_runner.go:130] > [crio.tracing]
	I1004 03:53:09.569272   48440 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1004 03:53:09.569277   48440 command_runner.go:130] > # enable_tracing = false
	I1004 03:53:09.569282   48440 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1004 03:53:09.569286   48440 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1004 03:53:09.569293   48440 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1004 03:53:09.569299   48440 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1004 03:53:09.569304   48440 command_runner.go:130] > # CRI-O NRI configuration.
	I1004 03:53:09.569309   48440 command_runner.go:130] > [crio.nri]
	I1004 03:53:09.569314   48440 command_runner.go:130] > # Globally enable or disable NRI.
	I1004 03:53:09.569319   48440 command_runner.go:130] > # enable_nri = false
	I1004 03:53:09.569323   48440 command_runner.go:130] > # NRI socket to listen on.
	I1004 03:53:09.569327   48440 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1004 03:53:09.569333   48440 command_runner.go:130] > # NRI plugin directory to use.
	I1004 03:53:09.569338   48440 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1004 03:53:09.569347   48440 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1004 03:53:09.569354   48440 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1004 03:53:09.569359   48440 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1004 03:53:09.569365   48440 command_runner.go:130] > # nri_disable_connections = false
	I1004 03:53:09.569370   48440 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1004 03:53:09.569375   48440 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1004 03:53:09.569380   48440 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1004 03:53:09.569386   48440 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1004 03:53:09.569392   48440 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1004 03:53:09.569398   48440 command_runner.go:130] > [crio.stats]
	I1004 03:53:09.569403   48440 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1004 03:53:09.569410   48440 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1004 03:53:09.569414   48440 command_runner.go:130] > # stats_collection_period = 0
	I1004 03:53:09.569660   48440 command_runner.go:130] ! time="2024-10-04 03:53:09.532926383Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1004 03:53:09.569685   48440 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1004 03:53:09.569757   48440 cni.go:84] Creating CNI manager for ""
	I1004 03:53:09.569769   48440 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1004 03:53:09.569778   48440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 03:53:09.569800   48440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-355278 NodeName:multinode-355278 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 03:53:09.569937   48440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-355278"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 03:53:09.569997   48440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:53:09.580438   48440 command_runner.go:130] > kubeadm
	I1004 03:53:09.580461   48440 command_runner.go:130] > kubectl
	I1004 03:53:09.580467   48440 command_runner.go:130] > kubelet
	I1004 03:53:09.580504   48440 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 03:53:09.580563   48440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 03:53:09.590521   48440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1004 03:53:09.608031   48440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:53:09.625081   48440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1004 03:53:09.642070   48440 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I1004 03:53:09.646149   48440 command_runner.go:130] > 192.168.39.50	control-plane.minikube.internal
	I1004 03:53:09.646230   48440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:53:09.783962   48440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:53:09.799133   48440 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278 for IP: 192.168.39.50
	I1004 03:53:09.799159   48440 certs.go:194] generating shared ca certs ...
	I1004 03:53:09.799183   48440 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:53:09.799355   48440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 03:53:09.799410   48440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 03:53:09.799423   48440 certs.go:256] generating profile certs ...
	I1004 03:53:09.799509   48440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/client.key
	I1004 03:53:09.799606   48440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/apiserver.key.a40bf4c6
	I1004 03:53:09.799674   48440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/proxy-client.key
	I1004 03:53:09.799687   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:53:09.799717   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:53:09.799735   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:53:09.799757   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:53:09.799775   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:53:09.799816   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:53:09.799832   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:53:09.799847   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:53:09.799902   48440 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 03:53:09.799937   48440 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 03:53:09.799946   48440 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:53:09.799969   48440 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:53:09.799991   48440 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:53:09.800012   48440 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 03:53:09.800046   48440 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:53:09.800071   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /usr/share/ca-certificates/168792.pem
	I1004 03:53:09.800084   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:53:09.800096   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem -> /usr/share/ca-certificates/16879.pem
	I1004 03:53:09.800655   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:53:09.825854   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 03:53:09.850027   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:53:09.876149   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 03:53:09.900789   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1004 03:53:09.925749   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 03:53:09.950235   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:53:09.975491   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:53:10.000202   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 03:53:10.025938   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:53:10.052344   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 03:53:10.077556   48440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 03:53:10.094797   48440 ssh_runner.go:195] Run: openssl version
	I1004 03:53:10.101021   48440 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1004 03:53:10.101082   48440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:53:10.112432   48440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:53:10.117158   48440 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:53:10.117336   48440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:53:10.117401   48440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:53:10.123054   48440 command_runner.go:130] > b5213941
	I1004 03:53:10.123112   48440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:53:10.132553   48440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 03:53:10.143743   48440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 03:53:10.148480   48440 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:53:10.148516   48440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:53:10.148573   48440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 03:53:10.155102   48440 command_runner.go:130] > 51391683
	I1004 03:53:10.155180   48440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 03:53:10.164713   48440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 03:53:10.175642   48440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 03:53:10.180194   48440 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:53:10.180311   48440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:53:10.180359   48440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 03:53:10.186171   48440 command_runner.go:130] > 3ec20f2e
	I1004 03:53:10.186261   48440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:53:10.195634   48440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:53:10.200465   48440 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:53:10.200486   48440 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1004 03:53:10.200492   48440 command_runner.go:130] > Device: 253,1	Inode: 9431080     Links: 1
	I1004 03:53:10.200499   48440 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1004 03:53:10.200508   48440 command_runner.go:130] > Access: 2024-10-04 03:46:16.286984072 +0000
	I1004 03:53:10.200514   48440 command_runner.go:130] > Modify: 2024-10-04 03:46:16.286984072 +0000
	I1004 03:53:10.200524   48440 command_runner.go:130] > Change: 2024-10-04 03:46:16.286984072 +0000
	I1004 03:53:10.200531   48440 command_runner.go:130] >  Birth: 2024-10-04 03:46:16.286984072 +0000
	I1004 03:53:10.200596   48440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 03:53:10.206382   48440 command_runner.go:130] > Certificate will not expire
	I1004 03:53:10.206451   48440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 03:53:10.212424   48440 command_runner.go:130] > Certificate will not expire
	I1004 03:53:10.212496   48440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 03:53:10.218219   48440 command_runner.go:130] > Certificate will not expire
	I1004 03:53:10.218286   48440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 03:53:10.223734   48440 command_runner.go:130] > Certificate will not expire
	I1004 03:53:10.224032   48440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 03:53:10.229781   48440 command_runner.go:130] > Certificate will not expire
	I1004 03:53:10.230052   48440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1004 03:53:10.236210   48440 command_runner.go:130] > Certificate will not expire
	I1004 03:53:10.236289   48440 kubeadm.go:392] StartCluster: {Name:multinode-355278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:multinode-355278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:53:10.236435   48440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 03:53:10.236482   48440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 03:53:10.278178   48440 command_runner.go:130] > 39a1ba2038e63d37b1d5f6533e8a48537d6f340aa3d13386b09dea55f6c22bde
	I1004 03:53:10.278271   48440 command_runner.go:130] > 7539f13d2609f891b7c1f281a29b0fd3ced6da7f0bb4aaec10bf7effb8ac2aec
	I1004 03:53:10.278311   48440 command_runner.go:130] > 6e7a1e7686c42fd4e684ccf5b0bb9ba22216642a608e382b7792b5b05c69b917
	I1004 03:53:10.278489   48440 command_runner.go:130] > 71f8b904bf2474edb78e656c449ae2877649b759936059875692a6a65aff51b5
	I1004 03:53:10.278551   48440 command_runner.go:130] > af880375229d67caf4e5f2f47f45f53fbe2ea8a7929ddfbed89ae712f1df9782
	I1004 03:53:10.278621   48440 command_runner.go:130] > b2c4811c6b28cad42ef132c3e4f94439f6a414a115217beca429f3f52c44a124
	I1004 03:53:10.278686   48440 command_runner.go:130] > b52fb2f1d2ee4270424d69c08b4c23e2cb78fbf86cfbe91d7fe5854543fb3a00
	I1004 03:53:10.278897   48440 command_runner.go:130] > 45cd0fd028aa821ba70f413472a0632ce6257bd3c40aa7e6498175238374a2d5
	I1004 03:53:10.278916   48440 command_runner.go:130] > 63272753e04e6a82d0e74cf60c149ce5823931f9d15dee5a6c9cad14acfbc509
	I1004 03:53:10.280492   48440 cri.go:89] found id: "39a1ba2038e63d37b1d5f6533e8a48537d6f340aa3d13386b09dea55f6c22bde"
	I1004 03:53:10.280507   48440 cri.go:89] found id: "7539f13d2609f891b7c1f281a29b0fd3ced6da7f0bb4aaec10bf7effb8ac2aec"
	I1004 03:53:10.280512   48440 cri.go:89] found id: "6e7a1e7686c42fd4e684ccf5b0bb9ba22216642a608e382b7792b5b05c69b917"
	I1004 03:53:10.280515   48440 cri.go:89] found id: "71f8b904bf2474edb78e656c449ae2877649b759936059875692a6a65aff51b5"
	I1004 03:53:10.280518   48440 cri.go:89] found id: "af880375229d67caf4e5f2f47f45f53fbe2ea8a7929ddfbed89ae712f1df9782"
	I1004 03:53:10.280524   48440 cri.go:89] found id: "b2c4811c6b28cad42ef132c3e4f94439f6a414a115217beca429f3f52c44a124"
	I1004 03:53:10.280527   48440 cri.go:89] found id: "b52fb2f1d2ee4270424d69c08b4c23e2cb78fbf86cfbe91d7fe5854543fb3a00"
	I1004 03:53:10.280530   48440 cri.go:89] found id: "45cd0fd028aa821ba70f413472a0632ce6257bd3c40aa7e6498175238374a2d5"
	I1004 03:53:10.280533   48440 cri.go:89] found id: "63272753e04e6a82d0e74cf60c149ce5823931f9d15dee5a6c9cad14acfbc509"
	I1004 03:53:10.280539   48440 cri.go:89] found id: ""
	I1004 03:53:10.280591   48440 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-355278 -n multinode-355278
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-355278 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (330.90s)
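The restart log above ends while minikube is enumerating the kube-system containers on the node. As a minimal sketch for repeating that enumeration by hand during triage, the two commands minikube issued (both quoted verbatim in the captured log) can be run directly on the node; the profile name and the use of `minikube ssh` are taken from this report, nothing else is assumed.

	# Run inside the node, e.g. after: minikube ssh -p multinode-355278
	# Both commands below are the ones shown in the captured log above.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc list -f json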
x
+
TestMultiNode/serial/StopMultiNode (145.44s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 stop
E1004 03:55:18.083517   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-355278 stop: exit status 82 (2m0.467066427s)
-- stdout --
	* Stopping node "multinode-355278-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-355278 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 status
E1004 03:57:08.997404   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:57:15.017119   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-355278 status: (18.840518377s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-355278 status --alsologtostderr: (3.360024617s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-355278 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-355278 status --alsologtostderr": 
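For reference, the stop/status sequence this test exercises can be replayed by hand with the same binary, profile name, and flags recorded above and below in this report; this is only a sketch for manual reproduction, not part of the test itself.

	# Re-run the failing stop, then inspect host/kubelet state the way the test does.
	out/minikube-linux-amd64 -p multinode-355278 stop
	out/minikube-linux-amd64 -p multinode-355278 status --alsologtostderr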
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-355278 -n multinode-355278
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-355278 logs -n 25: (2.157089881s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-355278 ssh -n                                                                 | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-355278 cp multinode-355278-m02:/home/docker/cp-test.txt                       | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278:/home/docker/cp-test_multinode-355278-m02_multinode-355278.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n                                                                 | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n multinode-355278 sudo cat                                       | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | /home/docker/cp-test_multinode-355278-m02_multinode-355278.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-355278 cp multinode-355278-m02:/home/docker/cp-test.txt                       | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m03:/home/docker/cp-test_multinode-355278-m02_multinode-355278-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n                                                                 | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n multinode-355278-m03 sudo cat                                   | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | /home/docker/cp-test_multinode-355278-m02_multinode-355278-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-355278 cp testdata/cp-test.txt                                                | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n                                                                 | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-355278 cp multinode-355278-m03:/home/docker/cp-test.txt                       | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile498822491/001/cp-test_multinode-355278-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n                                                                 | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-355278 cp multinode-355278-m03:/home/docker/cp-test.txt                       | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278:/home/docker/cp-test_multinode-355278-m03_multinode-355278.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n                                                                 | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n multinode-355278 sudo cat                                       | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | /home/docker/cp-test_multinode-355278-m03_multinode-355278.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-355278 cp multinode-355278-m03:/home/docker/cp-test.txt                       | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m02:/home/docker/cp-test_multinode-355278-m03_multinode-355278-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n                                                                 | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | multinode-355278-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-355278 ssh -n multinode-355278-m02 sudo cat                                   | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	|         | /home/docker/cp-test_multinode-355278-m03_multinode-355278-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-355278 node stop m03                                                          | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:48 UTC |
	| node    | multinode-355278 node start                                                             | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:48 UTC | 04 Oct 24 03:49 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-355278                                                                | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:49 UTC |                     |
	| stop    | -p multinode-355278                                                                     | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:49 UTC |                     |
	| start   | -p multinode-355278                                                                     | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:51 UTC | 04 Oct 24 03:54 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-355278                                                                | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:54 UTC |                     |
	| node    | multinode-355278 node delete                                                            | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:54 UTC | 04 Oct 24 03:54 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-355278 stop                                                                   | multinode-355278 | jenkins | v1.34.0 | 04 Oct 24 03:54 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 03:51:26
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 03:51:26.261064   48440 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:51:26.261175   48440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:51:26.261187   48440 out.go:358] Setting ErrFile to fd 2...
	I1004 03:51:26.261192   48440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:51:26.261360   48440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 03:51:26.261904   48440 out.go:352] Setting JSON to false
	I1004 03:51:26.262780   48440 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5631,"bootTime":1728008255,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 03:51:26.262867   48440 start.go:139] virtualization: kvm guest
	I1004 03:51:26.265310   48440 out.go:177] * [multinode-355278] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 03:51:26.267351   48440 notify.go:220] Checking for updates...
	I1004 03:51:26.267373   48440 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:51:26.268757   48440 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:51:26.270053   48440 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:51:26.271358   48440 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:51:26.272563   48440 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 03:51:26.273922   48440 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:51:26.275577   48440 config.go:182] Loaded profile config "multinode-355278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:51:26.275676   48440 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:51:26.276299   48440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:51:26.276353   48440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:51:26.290976   48440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I1004 03:51:26.291471   48440 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:51:26.292097   48440 main.go:141] libmachine: Using API Version  1
	I1004 03:51:26.292124   48440 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:51:26.292500   48440 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:51:26.292743   48440 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:51:26.326881   48440 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 03:51:26.328198   48440 start.go:297] selected driver: kvm2
	I1004 03:51:26.328215   48440 start.go:901] validating driver "kvm2" against &{Name:multinode-355278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-355278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:51:26.328342   48440 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:51:26.328717   48440 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:51:26.328794   48440 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 03:51:26.343048   48440 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 03:51:26.343985   48440 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:51:26.344020   48440 cni.go:84] Creating CNI manager for ""
	I1004 03:51:26.344080   48440 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1004 03:51:26.344153   48440 start.go:340] cluster config:
	{Name:multinode-355278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-355278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:51:26.344338   48440 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:51:26.346085   48440 out.go:177] * Starting "multinode-355278" primary control-plane node in "multinode-355278" cluster
	I1004 03:51:26.347349   48440 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:51:26.347392   48440 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 03:51:26.347404   48440 cache.go:56] Caching tarball of preloaded images
	I1004 03:51:26.347491   48440 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 03:51:26.347505   48440 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 03:51:26.347690   48440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/config.json ...
	I1004 03:51:26.347911   48440 start.go:360] acquireMachinesLock for multinode-355278: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 03:51:26.347964   48440 start.go:364] duration metric: took 35.478µs to acquireMachinesLock for "multinode-355278"
	I1004 03:51:26.347978   48440 start.go:96] Skipping create...Using existing machine configuration
	I1004 03:51:26.347985   48440 fix.go:54] fixHost starting: 
	I1004 03:51:26.348227   48440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:51:26.348274   48440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:51:26.362930   48440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
	I1004 03:51:26.363367   48440 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:51:26.363798   48440 main.go:141] libmachine: Using API Version  1
	I1004 03:51:26.363821   48440 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:51:26.364117   48440 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:51:26.364308   48440 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:51:26.364456   48440 main.go:141] libmachine: (multinode-355278) Calling .GetState
	I1004 03:51:26.365807   48440 fix.go:112] recreateIfNeeded on multinode-355278: state=Running err=<nil>
	W1004 03:51:26.365822   48440 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 03:51:26.367861   48440 out.go:177] * Updating the running kvm2 "multinode-355278" VM ...
	I1004 03:51:26.369300   48440 machine.go:93] provisionDockerMachine start ...
	I1004 03:51:26.369315   48440 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:51:26.369491   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:51:26.371649   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.372089   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:51:26.372112   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.372271   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:51:26.372418   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:26.372590   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:26.372675   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:51:26.372808   48440 main.go:141] libmachine: Using SSH client type: native
	I1004 03:51:26.373020   48440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1004 03:51:26.373032   48440 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 03:51:26.477209   48440 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-355278
	
	I1004 03:51:26.477235   48440 main.go:141] libmachine: (multinode-355278) Calling .GetMachineName
	I1004 03:51:26.477493   48440 buildroot.go:166] provisioning hostname "multinode-355278"
	I1004 03:51:26.477554   48440 main.go:141] libmachine: (multinode-355278) Calling .GetMachineName
	I1004 03:51:26.477738   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:51:26.480506   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.480934   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:51:26.480966   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.481074   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:51:26.481244   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:26.481377   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:26.481510   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:51:26.481681   48440 main.go:141] libmachine: Using SSH client type: native
	I1004 03:51:26.481828   48440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1004 03:51:26.481839   48440 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-355278 && echo "multinode-355278" | sudo tee /etc/hostname
	I1004 03:51:26.599991   48440 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-355278
	
	I1004 03:51:26.600022   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:51:26.602653   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.602995   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:51:26.603026   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.603138   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:51:26.603318   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:26.603461   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:26.603592   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:51:26.603724   48440 main.go:141] libmachine: Using SSH client type: native
	I1004 03:51:26.603924   48440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1004 03:51:26.603943   48440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-355278' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-355278/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-355278' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:51:26.704494   48440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:51:26.704534   48440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 03:51:26.704550   48440 buildroot.go:174] setting up certificates
	I1004 03:51:26.704558   48440 provision.go:84] configureAuth start
	I1004 03:51:26.704566   48440 main.go:141] libmachine: (multinode-355278) Calling .GetMachineName
	I1004 03:51:26.704792   48440 main.go:141] libmachine: (multinode-355278) Calling .GetIP
	I1004 03:51:26.707494   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.707965   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:51:26.707984   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.708198   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:51:26.710305   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.710651   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:51:26.710689   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.710819   48440 provision.go:143] copyHostCerts
	I1004 03:51:26.710848   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:51:26.710890   48440 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 03:51:26.710901   48440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 03:51:26.710968   48440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 03:51:26.711053   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:51:26.711070   48440 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 03:51:26.711077   48440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 03:51:26.711104   48440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 03:51:26.711185   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:51:26.711208   48440 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 03:51:26.711214   48440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 03:51:26.711237   48440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 03:51:26.711306   48440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.multinode-355278 san=[127.0.0.1 192.168.39.50 localhost minikube multinode-355278]
	I1004 03:51:26.932325   48440 provision.go:177] copyRemoteCerts
	I1004 03:51:26.932377   48440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:51:26.932397   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:51:26.935078   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.935395   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:51:26.935427   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:26.935598   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:51:26.935798   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:26.935937   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:51:26.936075   48440 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/multinode-355278/id_rsa Username:docker}
	I1004 03:51:27.018530   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 03:51:27.018602   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 03:51:27.045600   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 03:51:27.045672   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1004 03:51:27.070523   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 03:51:27.070591   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 03:51:27.096314   48440 provision.go:87] duration metric: took 391.744632ms to configureAuth
	I1004 03:51:27.096339   48440 buildroot.go:189] setting minikube options for container-runtime
	I1004 03:51:27.096534   48440 config.go:182] Loaded profile config "multinode-355278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:51:27.096615   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:51:27.099086   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:27.099430   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:51:27.099463   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:51:27.099654   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:51:27.099838   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:27.099947   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:51:27.100083   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:51:27.100235   48440 main.go:141] libmachine: Using SSH client type: native
	I1004 03:51:27.100389   48440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1004 03:51:27.100403   48440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 03:52:57.790704   48440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 03:52:57.790738   48440 machine.go:96] duration metric: took 1m31.421425476s to provisionDockerMachine
	I1004 03:52:57.790751   48440 start.go:293] postStartSetup for "multinode-355278" (driver="kvm2")
	I1004 03:52:57.790762   48440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:52:57.790780   48440 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:52:57.791081   48440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:52:57.791112   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:52:57.794371   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:57.794760   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:52:57.794786   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:57.794966   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:52:57.795156   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:52:57.795361   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:52:57.795582   48440 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/multinode-355278/id_rsa Username:docker}
	I1004 03:52:57.879285   48440 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:52:57.883739   48440 command_runner.go:130] > NAME=Buildroot
	I1004 03:52:57.883751   48440 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1004 03:52:57.883755   48440 command_runner.go:130] > ID=buildroot
	I1004 03:52:57.883786   48440 command_runner.go:130] > VERSION_ID=2023.02.9
	I1004 03:52:57.883795   48440 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1004 03:52:57.883874   48440 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 03:52:57.883893   48440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 03:52:57.883946   48440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 03:52:57.884013   48440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 03:52:57.884022   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /etc/ssl/certs/168792.pem
	I1004 03:52:57.884101   48440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:52:57.893854   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:52:57.917904   48440 start.go:296] duration metric: took 127.142001ms for postStartSetup
	I1004 03:52:57.917937   48440 fix.go:56] duration metric: took 1m31.569951652s for fixHost
	I1004 03:52:57.917955   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:52:57.920747   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:57.921107   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:52:57.921130   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:57.921262   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:52:57.921464   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:52:57.921624   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:52:57.921808   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:52:57.921965   48440 main.go:141] libmachine: Using SSH client type: native
	I1004 03:52:57.922150   48440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1004 03:52:57.922165   48440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 03:52:58.024738   48440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728013978.001547172
	
	I1004 03:52:58.024755   48440 fix.go:216] guest clock: 1728013978.001547172
	I1004 03:52:58.024762   48440 fix.go:229] Guest: 2024-10-04 03:52:58.001547172 +0000 UTC Remote: 2024-10-04 03:52:57.917940758 +0000 UTC m=+91.691074504 (delta=83.606414ms)
	I1004 03:52:58.024800   48440 fix.go:200] guest clock delta is within tolerance: 83.606414ms
	I1004 03:52:58.024805   48440 start.go:83] releasing machines lock for "multinode-355278", held for 1m31.676831925s
	I1004 03:52:58.024824   48440 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:52:58.025092   48440 main.go:141] libmachine: (multinode-355278) Calling .GetIP
	I1004 03:52:58.027746   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:58.028153   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:52:58.028178   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:58.028289   48440 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:52:58.028724   48440 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:52:58.028876   48440 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:52:58.028992   48440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:52:58.029039   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:52:58.029079   48440 ssh_runner.go:195] Run: cat /version.json
	I1004 03:52:58.029102   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:52:58.031775   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:58.031848   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:58.032157   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:52:58.032203   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:58.032234   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:52:58.032251   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:52:58.032308   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:52:58.032479   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:52:58.032482   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:52:58.032643   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:52:58.032656   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:52:58.032812   48440 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:52:58.032822   48440 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/multinode-355278/id_rsa Username:docker}
	I1004 03:52:58.032932   48440 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/multinode-355278/id_rsa Username:docker}
	I1004 03:52:58.132884   48440 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I1004 03:52:58.132960   48440 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1004 03:52:58.133033   48440 ssh_runner.go:195] Run: systemctl --version
	I1004 03:52:58.139007   48440 command_runner.go:130] > systemd 252 (252)
	I1004 03:52:58.139046   48440 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1004 03:52:58.139221   48440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 03:52:58.294982   48440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1004 03:52:58.303619   48440 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1004 03:52:58.304001   48440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 03:52:58.304076   48440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:52:58.313833   48440 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1004 03:52:58.313854   48440 start.go:495] detecting cgroup driver to use...
	I1004 03:52:58.313908   48440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 03:52:58.330972   48440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 03:52:58.346135   48440 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:52:58.346194   48440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:52:58.360002   48440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:52:58.374078   48440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:52:58.535942   48440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:52:58.671399   48440 docker.go:233] disabling docker service ...
	I1004 03:52:58.671474   48440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:52:58.692282   48440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:52:58.756406   48440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:52:58.937569   48440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:52:59.123278   48440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:52:59.138558   48440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:52:59.157588   48440 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1004 03:52:59.157629   48440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 03:52:59.157674   48440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:52:59.168003   48440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 03:52:59.168061   48440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:52:59.178608   48440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:52:59.189546   48440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:52:59.199660   48440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:52:59.210160   48440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:52:59.220504   48440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:52:59.231726   48440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 03:52:59.242076   48440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:52:59.251321   48440 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1004 03:52:59.251414   48440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:52:59.261227   48440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:52:59.401493   48440 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 03:53:09.326928   48440 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.925397322s)
	I1004 03:53:09.326957   48440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 03:53:09.327024   48440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 03:53:09.332077   48440 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1004 03:53:09.332101   48440 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1004 03:53:09.332107   48440 command_runner.go:130] > Device: 0,22	Inode: 1390        Links: 1
	I1004 03:53:09.332114   48440 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1004 03:53:09.332121   48440 command_runner.go:130] > Access: 2024-10-04 03:53:09.153782493 +0000
	I1004 03:53:09.332127   48440 command_runner.go:130] > Modify: 2024-10-04 03:53:09.153782493 +0000
	I1004 03:53:09.332133   48440 command_runner.go:130] > Change: 2024-10-04 03:53:09.153782493 +0000
	I1004 03:53:09.332139   48440 command_runner.go:130] >  Birth: -
	I1004 03:53:09.332404   48440 start.go:563] Will wait 60s for crictl version
	I1004 03:53:09.332456   48440 ssh_runner.go:195] Run: which crictl
	I1004 03:53:09.336585   48440 command_runner.go:130] > /usr/bin/crictl
	I1004 03:53:09.336725   48440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:53:09.377019   48440 command_runner.go:130] > Version:  0.1.0
	I1004 03:53:09.377042   48440 command_runner.go:130] > RuntimeName:  cri-o
	I1004 03:53:09.377047   48440 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1004 03:53:09.377052   48440 command_runner.go:130] > RuntimeApiVersion:  v1
	I1004 03:53:09.377235   48440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 03:53:09.377322   48440 ssh_runner.go:195] Run: crio --version
	I1004 03:53:09.405286   48440 command_runner.go:130] > crio version 1.29.1
	I1004 03:53:09.405313   48440 command_runner.go:130] > Version:        1.29.1
	I1004 03:53:09.405319   48440 command_runner.go:130] > GitCommit:      unknown
	I1004 03:53:09.405323   48440 command_runner.go:130] > GitCommitDate:  unknown
	I1004 03:53:09.405327   48440 command_runner.go:130] > GitTreeState:   clean
	I1004 03:53:09.405337   48440 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1004 03:53:09.405342   48440 command_runner.go:130] > GoVersion:      go1.21.6
	I1004 03:53:09.405346   48440 command_runner.go:130] > Compiler:       gc
	I1004 03:53:09.405350   48440 command_runner.go:130] > Platform:       linux/amd64
	I1004 03:53:09.405354   48440 command_runner.go:130] > Linkmode:       dynamic
	I1004 03:53:09.405359   48440 command_runner.go:130] > BuildTags:      
	I1004 03:53:09.405363   48440 command_runner.go:130] >   containers_image_ostree_stub
	I1004 03:53:09.405368   48440 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1004 03:53:09.405372   48440 command_runner.go:130] >   btrfs_noversion
	I1004 03:53:09.405376   48440 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1004 03:53:09.405380   48440 command_runner.go:130] >   libdm_no_deferred_remove
	I1004 03:53:09.405385   48440 command_runner.go:130] >   seccomp
	I1004 03:53:09.405393   48440 command_runner.go:130] > LDFlags:          unknown
	I1004 03:53:09.405397   48440 command_runner.go:130] > SeccompEnabled:   true
	I1004 03:53:09.405403   48440 command_runner.go:130] > AppArmorEnabled:  false
	I1004 03:53:09.406531   48440 ssh_runner.go:195] Run: crio --version
	I1004 03:53:09.435327   48440 command_runner.go:130] > crio version 1.29.1
	I1004 03:53:09.435347   48440 command_runner.go:130] > Version:        1.29.1
	I1004 03:53:09.435353   48440 command_runner.go:130] > GitCommit:      unknown
	I1004 03:53:09.435358   48440 command_runner.go:130] > GitCommitDate:  unknown
	I1004 03:53:09.435362   48440 command_runner.go:130] > GitTreeState:   clean
	I1004 03:53:09.435367   48440 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1004 03:53:09.435371   48440 command_runner.go:130] > GoVersion:      go1.21.6
	I1004 03:53:09.435375   48440 command_runner.go:130] > Compiler:       gc
	I1004 03:53:09.435380   48440 command_runner.go:130] > Platform:       linux/amd64
	I1004 03:53:09.435384   48440 command_runner.go:130] > Linkmode:       dynamic
	I1004 03:53:09.435388   48440 command_runner.go:130] > BuildTags:      
	I1004 03:53:09.435392   48440 command_runner.go:130] >   containers_image_ostree_stub
	I1004 03:53:09.435396   48440 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1004 03:53:09.435401   48440 command_runner.go:130] >   btrfs_noversion
	I1004 03:53:09.435407   48440 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1004 03:53:09.435412   48440 command_runner.go:130] >   libdm_no_deferred_remove
	I1004 03:53:09.435417   48440 command_runner.go:130] >   seccomp
	I1004 03:53:09.435423   48440 command_runner.go:130] > LDFlags:          unknown
	I1004 03:53:09.435428   48440 command_runner.go:130] > SeccompEnabled:   true
	I1004 03:53:09.435434   48440 command_runner.go:130] > AppArmorEnabled:  false
	I1004 03:53:09.437713   48440 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 03:53:09.438991   48440 main.go:141] libmachine: (multinode-355278) Calling .GetIP
	I1004 03:53:09.441506   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:53:09.441876   48440 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:53:09.441903   48440 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:53:09.442142   48440 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 03:53:09.446358   48440 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1004 03:53:09.446534   48440 kubeadm.go:883] updating cluster {Name:multinode-355278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-355278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 03:53:09.446689   48440 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 03:53:09.446747   48440 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:53:09.486600   48440 command_runner.go:130] > {
	I1004 03:53:09.486628   48440 command_runner.go:130] >   "images": [
	I1004 03:53:09.486635   48440 command_runner.go:130] >     {
	I1004 03:53:09.486649   48440 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1004 03:53:09.486658   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.486668   48440 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1004 03:53:09.486673   48440 command_runner.go:130] >       ],
	I1004 03:53:09.486679   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.486692   48440 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1004 03:53:09.486702   48440 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1004 03:53:09.486712   48440 command_runner.go:130] >       ],
	I1004 03:53:09.486717   48440 command_runner.go:130] >       "size": "87190579",
	I1004 03:53:09.486724   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.486729   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.486738   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.486749   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.486754   48440 command_runner.go:130] >     },
	I1004 03:53:09.486759   48440 command_runner.go:130] >     {
	I1004 03:53:09.486770   48440 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1004 03:53:09.486776   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.486785   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1004 03:53:09.486791   48440 command_runner.go:130] >       ],
	I1004 03:53:09.486798   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.486809   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1004 03:53:09.486823   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1004 03:53:09.486829   48440 command_runner.go:130] >       ],
	I1004 03:53:09.486836   48440 command_runner.go:130] >       "size": "1363676",
	I1004 03:53:09.486842   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.486854   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.486859   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.486864   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.486867   48440 command_runner.go:130] >     },
	I1004 03:53:09.486871   48440 command_runner.go:130] >     {
	I1004 03:53:09.486876   48440 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1004 03:53:09.486883   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.486888   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1004 03:53:09.486891   48440 command_runner.go:130] >       ],
	I1004 03:53:09.486895   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.486902   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1004 03:53:09.486910   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1004 03:53:09.486913   48440 command_runner.go:130] >       ],
	I1004 03:53:09.486917   48440 command_runner.go:130] >       "size": "31470524",
	I1004 03:53:09.486921   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.486925   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.486929   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.486933   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.486938   48440 command_runner.go:130] >     },
	I1004 03:53:09.486941   48440 command_runner.go:130] >     {
	I1004 03:53:09.486947   48440 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1004 03:53:09.486952   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.486956   48440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1004 03:53:09.486959   48440 command_runner.go:130] >       ],
	I1004 03:53:09.486963   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.486972   48440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1004 03:53:09.486982   48440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1004 03:53:09.486987   48440 command_runner.go:130] >       ],
	I1004 03:53:09.486991   48440 command_runner.go:130] >       "size": "63273227",
	I1004 03:53:09.486994   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.486998   48440 command_runner.go:130] >       "username": "nonroot",
	I1004 03:53:09.487002   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.487006   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.487011   48440 command_runner.go:130] >     },
	I1004 03:53:09.487016   48440 command_runner.go:130] >     {
	I1004 03:53:09.487022   48440 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1004 03:53:09.487025   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.487029   48440 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1004 03:53:09.487033   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487037   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.487043   48440 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1004 03:53:09.487051   48440 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1004 03:53:09.487065   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487069   48440 command_runner.go:130] >       "size": "149009664",
	I1004 03:53:09.487072   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.487076   48440 command_runner.go:130] >         "value": "0"
	I1004 03:53:09.487079   48440 command_runner.go:130] >       },
	I1004 03:53:09.487083   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.487086   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.487090   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.487094   48440 command_runner.go:130] >     },
	I1004 03:53:09.487097   48440 command_runner.go:130] >     {
	I1004 03:53:09.487103   48440 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1004 03:53:09.487107   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.487112   48440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1004 03:53:09.487116   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487119   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.487126   48440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1004 03:53:09.487134   48440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1004 03:53:09.487137   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487141   48440 command_runner.go:130] >       "size": "95237600",
	I1004 03:53:09.487147   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.487151   48440 command_runner.go:130] >         "value": "0"
	I1004 03:53:09.487154   48440 command_runner.go:130] >       },
	I1004 03:53:09.487158   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.487163   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.487170   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.487174   48440 command_runner.go:130] >     },
	I1004 03:53:09.487178   48440 command_runner.go:130] >     {
	I1004 03:53:09.487183   48440 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1004 03:53:09.487193   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.487200   48440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1004 03:53:09.487204   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487210   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.487218   48440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1004 03:53:09.487227   48440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1004 03:53:09.487231   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487235   48440 command_runner.go:130] >       "size": "89437508",
	I1004 03:53:09.487239   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.487242   48440 command_runner.go:130] >         "value": "0"
	I1004 03:53:09.487246   48440 command_runner.go:130] >       },
	I1004 03:53:09.487250   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.487255   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.487259   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.487262   48440 command_runner.go:130] >     },
	I1004 03:53:09.487266   48440 command_runner.go:130] >     {
	I1004 03:53:09.487272   48440 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1004 03:53:09.487277   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.487282   48440 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1004 03:53:09.487288   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487292   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.487306   48440 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1004 03:53:09.487316   48440 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1004 03:53:09.487321   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487327   48440 command_runner.go:130] >       "size": "92733849",
	I1004 03:53:09.487331   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.487335   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.487341   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.487345   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.487351   48440 command_runner.go:130] >     },
	I1004 03:53:09.487354   48440 command_runner.go:130] >     {
	I1004 03:53:09.487360   48440 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1004 03:53:09.487364   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.487369   48440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1004 03:53:09.487372   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487375   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.487393   48440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1004 03:53:09.487400   48440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1004 03:53:09.487403   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487407   48440 command_runner.go:130] >       "size": "68420934",
	I1004 03:53:09.487410   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.487414   48440 command_runner.go:130] >         "value": "0"
	I1004 03:53:09.487418   48440 command_runner.go:130] >       },
	I1004 03:53:09.487421   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.487425   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.487429   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.487432   48440 command_runner.go:130] >     },
	I1004 03:53:09.487435   48440 command_runner.go:130] >     {
	I1004 03:53:09.487442   48440 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1004 03:53:09.487446   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.487450   48440 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1004 03:53:09.487453   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487457   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.487463   48440 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1004 03:53:09.487469   48440 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1004 03:53:09.487472   48440 command_runner.go:130] >       ],
	I1004 03:53:09.487475   48440 command_runner.go:130] >       "size": "742080",
	I1004 03:53:09.487479   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.487483   48440 command_runner.go:130] >         "value": "65535"
	I1004 03:53:09.487486   48440 command_runner.go:130] >       },
	I1004 03:53:09.487490   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.487493   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.487498   48440 command_runner.go:130] >       "pinned": true
	I1004 03:53:09.487504   48440 command_runner.go:130] >     }
	I1004 03:53:09.487507   48440 command_runner.go:130] >   ]
	I1004 03:53:09.487510   48440 command_runner.go:130] > }
	I1004 03:53:09.487675   48440 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:53:09.487689   48440 crio.go:433] Images already preloaded, skipping extraction
	I1004 03:53:09.487742   48440 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:53:09.520592   48440 command_runner.go:130] > {
	I1004 03:53:09.520620   48440 command_runner.go:130] >   "images": [
	I1004 03:53:09.520626   48440 command_runner.go:130] >     {
	I1004 03:53:09.520637   48440 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1004 03:53:09.520645   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.520658   48440 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1004 03:53:09.520663   48440 command_runner.go:130] >       ],
	I1004 03:53:09.520669   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.520681   48440 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1004 03:53:09.520692   48440 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1004 03:53:09.520702   48440 command_runner.go:130] >       ],
	I1004 03:53:09.520710   48440 command_runner.go:130] >       "size": "87190579",
	I1004 03:53:09.520716   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.520726   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.520749   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.520760   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.520766   48440 command_runner.go:130] >     },
	I1004 03:53:09.520772   48440 command_runner.go:130] >     {
	I1004 03:53:09.520781   48440 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1004 03:53:09.520790   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.520798   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1004 03:53:09.520804   48440 command_runner.go:130] >       ],
	I1004 03:53:09.520811   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.520819   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1004 03:53:09.520828   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1004 03:53:09.520832   48440 command_runner.go:130] >       ],
	I1004 03:53:09.520836   48440 command_runner.go:130] >       "size": "1363676",
	I1004 03:53:09.520842   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.520848   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.520851   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.520855   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.520859   48440 command_runner.go:130] >     },
	I1004 03:53:09.520862   48440 command_runner.go:130] >     {
	I1004 03:53:09.520867   48440 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1004 03:53:09.520874   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.520879   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1004 03:53:09.520885   48440 command_runner.go:130] >       ],
	I1004 03:53:09.520889   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.520897   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1004 03:53:09.520906   48440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1004 03:53:09.520909   48440 command_runner.go:130] >       ],
	I1004 03:53:09.520914   48440 command_runner.go:130] >       "size": "31470524",
	I1004 03:53:09.520917   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.520921   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.520925   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.520928   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.520932   48440 command_runner.go:130] >     },
	I1004 03:53:09.520935   48440 command_runner.go:130] >     {
	I1004 03:53:09.520941   48440 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1004 03:53:09.520947   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.520951   48440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1004 03:53:09.520955   48440 command_runner.go:130] >       ],
	I1004 03:53:09.520959   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.520972   48440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1004 03:53:09.520990   48440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1004 03:53:09.520999   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521007   48440 command_runner.go:130] >       "size": "63273227",
	I1004 03:53:09.521014   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.521019   48440 command_runner.go:130] >       "username": "nonroot",
	I1004 03:53:09.521028   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.521032   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.521035   48440 command_runner.go:130] >     },
	I1004 03:53:09.521039   48440 command_runner.go:130] >     {
	I1004 03:53:09.521045   48440 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1004 03:53:09.521051   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.521055   48440 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1004 03:53:09.521061   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521065   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.521071   48440 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1004 03:53:09.521079   48440 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1004 03:53:09.521083   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521088   48440 command_runner.go:130] >       "size": "149009664",
	I1004 03:53:09.521093   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.521097   48440 command_runner.go:130] >         "value": "0"
	I1004 03:53:09.521101   48440 command_runner.go:130] >       },
	I1004 03:53:09.521106   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.521110   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.521116   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.521120   48440 command_runner.go:130] >     },
	I1004 03:53:09.521128   48440 command_runner.go:130] >     {
	I1004 03:53:09.521137   48440 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1004 03:53:09.521141   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.521146   48440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1004 03:53:09.521150   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521154   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.521163   48440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1004 03:53:09.521170   48440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1004 03:53:09.521176   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521180   48440 command_runner.go:130] >       "size": "95237600",
	I1004 03:53:09.521184   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.521187   48440 command_runner.go:130] >         "value": "0"
	I1004 03:53:09.521191   48440 command_runner.go:130] >       },
	I1004 03:53:09.521195   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.521200   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.521209   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.521214   48440 command_runner.go:130] >     },
	I1004 03:53:09.521218   48440 command_runner.go:130] >     {
	I1004 03:53:09.521224   48440 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1004 03:53:09.521231   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.521236   48440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1004 03:53:09.521242   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521246   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.521253   48440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1004 03:53:09.521263   48440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1004 03:53:09.521270   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521279   48440 command_runner.go:130] >       "size": "89437508",
	I1004 03:53:09.521284   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.521291   48440 command_runner.go:130] >         "value": "0"
	I1004 03:53:09.521299   48440 command_runner.go:130] >       },
	I1004 03:53:09.521304   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.521312   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.521318   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.521326   48440 command_runner.go:130] >     },
	I1004 03:53:09.521331   48440 command_runner.go:130] >     {
	I1004 03:53:09.521341   48440 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1004 03:53:09.521350   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.521357   48440 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1004 03:53:09.521365   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521371   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.521394   48440 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1004 03:53:09.521410   48440 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1004 03:53:09.521417   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521424   48440 command_runner.go:130] >       "size": "92733849",
	I1004 03:53:09.521432   48440 command_runner.go:130] >       "uid": null,
	I1004 03:53:09.521436   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.521443   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.521449   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.521457   48440 command_runner.go:130] >     },
	I1004 03:53:09.521464   48440 command_runner.go:130] >     {
	I1004 03:53:09.521477   48440 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1004 03:53:09.521486   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.521494   48440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1004 03:53:09.521502   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521509   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.521524   48440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1004 03:53:09.521541   48440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1004 03:53:09.521548   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521553   48440 command_runner.go:130] >       "size": "68420934",
	I1004 03:53:09.521559   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.521563   48440 command_runner.go:130] >         "value": "0"
	I1004 03:53:09.521568   48440 command_runner.go:130] >       },
	I1004 03:53:09.521575   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.521580   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.521590   48440 command_runner.go:130] >       "pinned": false
	I1004 03:53:09.521595   48440 command_runner.go:130] >     },
	I1004 03:53:09.521603   48440 command_runner.go:130] >     {
	I1004 03:53:09.521612   48440 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1004 03:53:09.521621   48440 command_runner.go:130] >       "repoTags": [
	I1004 03:53:09.521627   48440 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1004 03:53:09.521633   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521642   48440 command_runner.go:130] >       "repoDigests": [
	I1004 03:53:09.521652   48440 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1004 03:53:09.521673   48440 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1004 03:53:09.521682   48440 command_runner.go:130] >       ],
	I1004 03:53:09.521689   48440 command_runner.go:130] >       "size": "742080",
	I1004 03:53:09.521697   48440 command_runner.go:130] >       "uid": {
	I1004 03:53:09.521704   48440 command_runner.go:130] >         "value": "65535"
	I1004 03:53:09.521709   48440 command_runner.go:130] >       },
	I1004 03:53:09.521718   48440 command_runner.go:130] >       "username": "",
	I1004 03:53:09.521724   48440 command_runner.go:130] >       "spec": null,
	I1004 03:53:09.521734   48440 command_runner.go:130] >       "pinned": true
	I1004 03:53:09.521742   48440 command_runner.go:130] >     }
	I1004 03:53:09.521748   48440 command_runner.go:130] >   ]
	I1004 03:53:09.521755   48440 command_runner.go:130] > }
	I1004 03:53:09.521877   48440 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 03:53:09.521887   48440 cache_images.go:84] Images are preloaded, skipping loading
	I1004 03:53:09.521894   48440 kubeadm.go:934] updating node { 192.168.39.50 8443 v1.31.1 crio true true} ...
	I1004 03:53:09.521981   48440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-355278 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-355278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 03:53:09.522042   48440 ssh_runner.go:195] Run: crio config
	I1004 03:53:09.564316   48440 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1004 03:53:09.564348   48440 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1004 03:53:09.564359   48440 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1004 03:53:09.564363   48440 command_runner.go:130] > #
	I1004 03:53:09.564374   48440 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1004 03:53:09.564383   48440 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1004 03:53:09.564394   48440 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1004 03:53:09.564405   48440 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1004 03:53:09.564412   48440 command_runner.go:130] > # reload'.
	I1004 03:53:09.564422   48440 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1004 03:53:09.564434   48440 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1004 03:53:09.564446   48440 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1004 03:53:09.564457   48440 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1004 03:53:09.564463   48440 command_runner.go:130] > [crio]
	I1004 03:53:09.564473   48440 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1004 03:53:09.564484   48440 command_runner.go:130] > # containers images, in this directory.
	I1004 03:53:09.564491   48440 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1004 03:53:09.564519   48440 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1004 03:53:09.564531   48440 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1004 03:53:09.564543   48440 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1004 03:53:09.564866   48440 command_runner.go:130] > # imagestore = ""
	I1004 03:53:09.564890   48440 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1004 03:53:09.564900   48440 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1004 03:53:09.564908   48440 command_runner.go:130] > storage_driver = "overlay"
	I1004 03:53:09.564917   48440 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1004 03:53:09.564935   48440 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1004 03:53:09.564940   48440 command_runner.go:130] > storage_option = [
	I1004 03:53:09.564949   48440 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1004 03:53:09.565030   48440 command_runner.go:130] > ]
	I1004 03:53:09.565053   48440 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1004 03:53:09.565063   48440 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1004 03:53:09.565074   48440 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1004 03:53:09.565084   48440 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1004 03:53:09.565093   48440 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1004 03:53:09.565100   48440 command_runner.go:130] > # always happen on a node reboot
	I1004 03:53:09.565111   48440 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1004 03:53:09.565131   48440 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1004 03:53:09.565143   48440 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1004 03:53:09.565153   48440 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1004 03:53:09.565164   48440 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1004 03:53:09.565178   48440 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1004 03:53:09.565193   48440 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1004 03:53:09.565205   48440 command_runner.go:130] > # internal_wipe = true
	I1004 03:53:09.565222   48440 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1004 03:53:09.565233   48440 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1004 03:53:09.565243   48440 command_runner.go:130] > # internal_repair = false
	I1004 03:53:09.565252   48440 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1004 03:53:09.565264   48440 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1004 03:53:09.565273   48440 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1004 03:53:09.565284   48440 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1004 03:53:09.565296   48440 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1004 03:53:09.565305   48440 command_runner.go:130] > [crio.api]
	I1004 03:53:09.565313   48440 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1004 03:53:09.565324   48440 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1004 03:53:09.565342   48440 command_runner.go:130] > # IP address on which the stream server will listen.
	I1004 03:53:09.565352   48440 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1004 03:53:09.565360   48440 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1004 03:53:09.565366   48440 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1004 03:53:09.565370   48440 command_runner.go:130] > # stream_port = "0"
	I1004 03:53:09.565375   48440 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1004 03:53:09.565381   48440 command_runner.go:130] > # stream_enable_tls = false
	I1004 03:53:09.565387   48440 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1004 03:53:09.565395   48440 command_runner.go:130] > # stream_idle_timeout = ""
	I1004 03:53:09.565405   48440 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1004 03:53:09.565419   48440 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1004 03:53:09.565428   48440 command_runner.go:130] > # minutes.
	I1004 03:53:09.565435   48440 command_runner.go:130] > # stream_tls_cert = ""
	I1004 03:53:09.565446   48440 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1004 03:53:09.565458   48440 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1004 03:53:09.565471   48440 command_runner.go:130] > # stream_tls_key = ""
	I1004 03:53:09.565486   48440 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1004 03:53:09.565496   48440 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1004 03:53:09.565511   48440 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1004 03:53:09.565520   48440 command_runner.go:130] > # stream_tls_ca = ""
	I1004 03:53:09.565534   48440 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1004 03:53:09.565545   48440 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1004 03:53:09.565558   48440 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1004 03:53:09.565568   48440 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1004 03:53:09.565578   48440 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1004 03:53:09.565589   48440 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1004 03:53:09.565598   48440 command_runner.go:130] > [crio.runtime]
	I1004 03:53:09.565609   48440 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1004 03:53:09.565620   48440 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1004 03:53:09.565628   48440 command_runner.go:130] > # "nofile=1024:2048"
	I1004 03:53:09.565639   48440 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1004 03:53:09.565648   48440 command_runner.go:130] > # default_ulimits = [
	I1004 03:53:09.565653   48440 command_runner.go:130] > # ]
	I1004 03:53:09.565662   48440 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1004 03:53:09.565669   48440 command_runner.go:130] > # no_pivot = false
	I1004 03:53:09.565681   48440 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1004 03:53:09.565693   48440 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1004 03:53:09.565706   48440 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1004 03:53:09.565722   48440 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1004 03:53:09.565730   48440 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1004 03:53:09.565736   48440 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1004 03:53:09.565743   48440 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1004 03:53:09.565747   48440 command_runner.go:130] > # Cgroup setting for conmon
	I1004 03:53:09.565756   48440 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1004 03:53:09.565759   48440 command_runner.go:130] > conmon_cgroup = "pod"
	I1004 03:53:09.565767   48440 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1004 03:53:09.565772   48440 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1004 03:53:09.565780   48440 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1004 03:53:09.565783   48440 command_runner.go:130] > conmon_env = [
	I1004 03:53:09.565794   48440 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1004 03:53:09.565799   48440 command_runner.go:130] > ]
	I1004 03:53:09.565807   48440 command_runner.go:130] > # Additional environment variables to set for all the
	I1004 03:53:09.565815   48440 command_runner.go:130] > # containers. These are overridden if set in the
	I1004 03:53:09.565827   48440 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1004 03:53:09.565836   48440 command_runner.go:130] > # default_env = [
	I1004 03:53:09.565844   48440 command_runner.go:130] > # ]
	I1004 03:53:09.565856   48440 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1004 03:53:09.565868   48440 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1004 03:53:09.565877   48440 command_runner.go:130] > # selinux = false
	I1004 03:53:09.565887   48440 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1004 03:53:09.565900   48440 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1004 03:53:09.565912   48440 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1004 03:53:09.565920   48440 command_runner.go:130] > # seccomp_profile = ""
	I1004 03:53:09.565929   48440 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1004 03:53:09.565945   48440 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1004 03:53:09.565959   48440 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1004 03:53:09.565969   48440 command_runner.go:130] > # which might increase security.
	I1004 03:53:09.565978   48440 command_runner.go:130] > # This option is currently deprecated,
	I1004 03:53:09.565989   48440 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1004 03:53:09.565998   48440 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1004 03:53:09.566011   48440 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1004 03:53:09.566022   48440 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1004 03:53:09.566033   48440 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1004 03:53:09.566044   48440 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1004 03:53:09.566054   48440 command_runner.go:130] > # This option supports live configuration reload.
	I1004 03:53:09.566065   48440 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1004 03:53:09.566077   48440 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1004 03:53:09.566088   48440 command_runner.go:130] > # the cgroup blockio controller.
	I1004 03:53:09.566095   48440 command_runner.go:130] > # blockio_config_file = ""
	I1004 03:53:09.566106   48440 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1004 03:53:09.566115   48440 command_runner.go:130] > # blockio parameters.
	I1004 03:53:09.566121   48440 command_runner.go:130] > # blockio_reload = false
	I1004 03:53:09.566131   48440 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1004 03:53:09.566141   48440 command_runner.go:130] > # irqbalance daemon.
	I1004 03:53:09.566149   48440 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1004 03:53:09.566162   48440 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1004 03:53:09.566175   48440 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1004 03:53:09.566188   48440 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1004 03:53:09.566202   48440 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1004 03:53:09.566215   48440 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1004 03:53:09.566226   48440 command_runner.go:130] > # This option supports live configuration reload.
	I1004 03:53:09.566233   48440 command_runner.go:130] > # rdt_config_file = ""
	I1004 03:53:09.566244   48440 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1004 03:53:09.566253   48440 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1004 03:53:09.566272   48440 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1004 03:53:09.566282   48440 command_runner.go:130] > # separate_pull_cgroup = ""
	I1004 03:53:09.566292   48440 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1004 03:53:09.566304   48440 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1004 03:53:09.566310   48440 command_runner.go:130] > # will be added.
	I1004 03:53:09.566319   48440 command_runner.go:130] > # default_capabilities = [
	I1004 03:53:09.566325   48440 command_runner.go:130] > # 	"CHOWN",
	I1004 03:53:09.566334   48440 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1004 03:53:09.566340   48440 command_runner.go:130] > # 	"FSETID",
	I1004 03:53:09.566349   48440 command_runner.go:130] > # 	"FOWNER",
	I1004 03:53:09.566357   48440 command_runner.go:130] > # 	"SETGID",
	I1004 03:53:09.566365   48440 command_runner.go:130] > # 	"SETUID",
	I1004 03:53:09.566371   48440 command_runner.go:130] > # 	"SETPCAP",
	I1004 03:53:09.566380   48440 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1004 03:53:09.566386   48440 command_runner.go:130] > # 	"KILL",
	I1004 03:53:09.566394   48440 command_runner.go:130] > # ]
	I1004 03:53:09.566405   48440 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1004 03:53:09.566419   48440 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1004 03:53:09.566429   48440 command_runner.go:130] > # add_inheritable_capabilities = false
	I1004 03:53:09.566438   48440 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1004 03:53:09.566449   48440 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1004 03:53:09.566455   48440 command_runner.go:130] > default_sysctls = [
	I1004 03:53:09.566481   48440 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1004 03:53:09.566491   48440 command_runner.go:130] > ]
	I1004 03:53:09.566499   48440 command_runner.go:130] > # List of devices on the host that a
	I1004 03:53:09.566511   48440 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1004 03:53:09.566519   48440 command_runner.go:130] > # allowed_devices = [
	I1004 03:53:09.566530   48440 command_runner.go:130] > # 	"/dev/fuse",
	I1004 03:53:09.566535   48440 command_runner.go:130] > # ]
	I1004 03:53:09.566546   48440 command_runner.go:130] > # List of additional devices. specified as
	I1004 03:53:09.566557   48440 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1004 03:53:09.566568   48440 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1004 03:53:09.566578   48440 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1004 03:53:09.566587   48440 command_runner.go:130] > # additional_devices = [
	I1004 03:53:09.566592   48440 command_runner.go:130] > # ]
	I1004 03:53:09.566602   48440 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1004 03:53:09.566612   48440 command_runner.go:130] > # cdi_spec_dirs = [
	I1004 03:53:09.566618   48440 command_runner.go:130] > # 	"/etc/cdi",
	I1004 03:53:09.566627   48440 command_runner.go:130] > # 	"/var/run/cdi",
	I1004 03:53:09.566633   48440 command_runner.go:130] > # ]
	I1004 03:53:09.566645   48440 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1004 03:53:09.566657   48440 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1004 03:53:09.566667   48440 command_runner.go:130] > # Defaults to false.
	I1004 03:53:09.566676   48440 command_runner.go:130] > # device_ownership_from_security_context = false
	I1004 03:53:09.566688   48440 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1004 03:53:09.566697   48440 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1004 03:53:09.566706   48440 command_runner.go:130] > # hooks_dir = [
	I1004 03:53:09.566713   48440 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1004 03:53:09.566719   48440 command_runner.go:130] > # ]
	I1004 03:53:09.566725   48440 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1004 03:53:09.566732   48440 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1004 03:53:09.566737   48440 command_runner.go:130] > # its default mounts from the following two files:
	I1004 03:53:09.566739   48440 command_runner.go:130] > #
	I1004 03:53:09.566745   48440 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1004 03:53:09.566754   48440 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1004 03:53:09.566759   48440 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1004 03:53:09.566765   48440 command_runner.go:130] > #
	I1004 03:53:09.566774   48440 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1004 03:53:09.566787   48440 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1004 03:53:09.566797   48440 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1004 03:53:09.566809   48440 command_runner.go:130] > #      only add mounts it finds in this file.
	I1004 03:53:09.566814   48440 command_runner.go:130] > #
	I1004 03:53:09.566822   48440 command_runner.go:130] > # default_mounts_file = ""
	I1004 03:53:09.566833   48440 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1004 03:53:09.566850   48440 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1004 03:53:09.566858   48440 command_runner.go:130] > pids_limit = 1024
	I1004 03:53:09.566868   48440 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1004 03:53:09.566880   48440 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1004 03:53:09.566890   48440 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1004 03:53:09.566904   48440 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1004 03:53:09.566913   48440 command_runner.go:130] > # log_size_max = -1
	I1004 03:53:09.566925   48440 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1004 03:53:09.566934   48440 command_runner.go:130] > # log_to_journald = false
	I1004 03:53:09.566943   48440 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1004 03:53:09.566954   48440 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1004 03:53:09.566965   48440 command_runner.go:130] > # Path to directory for container attach sockets.
	I1004 03:53:09.566977   48440 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1004 03:53:09.566985   48440 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1004 03:53:09.566994   48440 command_runner.go:130] > # bind_mount_prefix = ""
	I1004 03:53:09.567003   48440 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1004 03:53:09.567012   48440 command_runner.go:130] > # read_only = false
	I1004 03:53:09.567022   48440 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1004 03:53:09.567035   48440 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1004 03:53:09.567046   48440 command_runner.go:130] > # live configuration reload.
	I1004 03:53:09.567070   48440 command_runner.go:130] > # log_level = "info"
	I1004 03:53:09.567081   48440 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1004 03:53:09.567089   48440 command_runner.go:130] > # This option supports live configuration reload.
	I1004 03:53:09.567098   48440 command_runner.go:130] > # log_filter = ""
	I1004 03:53:09.567109   48440 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1004 03:53:09.567121   48440 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1004 03:53:09.567128   48440 command_runner.go:130] > # separated by comma.
	I1004 03:53:09.567140   48440 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1004 03:53:09.567149   48440 command_runner.go:130] > # uid_mappings = ""
	I1004 03:53:09.567160   48440 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1004 03:53:09.567174   48440 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1004 03:53:09.567183   48440 command_runner.go:130] > # separated by comma.
	I1004 03:53:09.567199   48440 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1004 03:53:09.567208   48440 command_runner.go:130] > # gid_mappings = ""
	I1004 03:53:09.567218   48440 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1004 03:53:09.567230   48440 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1004 03:53:09.567247   48440 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1004 03:53:09.567262   48440 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1004 03:53:09.567274   48440 command_runner.go:130] > # minimum_mappable_uid = -1
	I1004 03:53:09.567286   48440 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1004 03:53:09.567298   48440 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1004 03:53:09.567308   48440 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1004 03:53:09.567315   48440 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1004 03:53:09.567321   48440 command_runner.go:130] > # minimum_mappable_gid = -1
	I1004 03:53:09.567327   48440 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1004 03:53:09.567342   48440 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1004 03:53:09.567353   48440 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1004 03:53:09.567361   48440 command_runner.go:130] > # ctr_stop_timeout = 30
	I1004 03:53:09.567370   48440 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1004 03:53:09.567383   48440 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1004 03:53:09.567394   48440 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1004 03:53:09.567401   48440 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1004 03:53:09.567411   48440 command_runner.go:130] > drop_infra_ctr = false
	I1004 03:53:09.567420   48440 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1004 03:53:09.567432   48440 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1004 03:53:09.567446   48440 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1004 03:53:09.567456   48440 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1004 03:53:09.567475   48440 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1004 03:53:09.567487   48440 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1004 03:53:09.567497   48440 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1004 03:53:09.567506   48440 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1004 03:53:09.567513   48440 command_runner.go:130] > # shared_cpuset = ""
	I1004 03:53:09.567527   48440 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1004 03:53:09.567538   48440 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1004 03:53:09.567547   48440 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1004 03:53:09.567561   48440 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1004 03:53:09.567570   48440 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1004 03:53:09.567579   48440 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1004 03:53:09.567599   48440 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1004 03:53:09.567610   48440 command_runner.go:130] > # enable_criu_support = false
	I1004 03:53:09.567619   48440 command_runner.go:130] > # Enable/disable the generation of the container,
	I1004 03:53:09.567635   48440 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1004 03:53:09.567645   48440 command_runner.go:130] > # enable_pod_events = false
	I1004 03:53:09.567655   48440 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1004 03:53:09.567679   48440 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1004 03:53:09.567686   48440 command_runner.go:130] > # default_runtime = "runc"
	I1004 03:53:09.567697   48440 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1004 03:53:09.567711   48440 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1004 03:53:09.567726   48440 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1004 03:53:09.567734   48440 command_runner.go:130] > # creation as a file is not desired either.
	I1004 03:53:09.567741   48440 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1004 03:53:09.567748   48440 command_runner.go:130] > # the hostname is being managed dynamically.
	I1004 03:53:09.567753   48440 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1004 03:53:09.567756   48440 command_runner.go:130] > # ]
	I1004 03:53:09.567762   48440 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1004 03:53:09.567770   48440 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1004 03:53:09.567776   48440 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1004 03:53:09.567815   48440 command_runner.go:130] > # Each entry in the table should follow the format:
	I1004 03:53:09.567820   48440 command_runner.go:130] > #
	I1004 03:53:09.567831   48440 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1004 03:53:09.567841   48440 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1004 03:53:09.567883   48440 command_runner.go:130] > # runtime_type = "oci"
	I1004 03:53:09.567890   48440 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1004 03:53:09.567895   48440 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1004 03:53:09.567906   48440 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1004 03:53:09.567914   48440 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1004 03:53:09.567917   48440 command_runner.go:130] > # monitor_env = []
	I1004 03:53:09.567922   48440 command_runner.go:130] > # privileged_without_host_devices = false
	I1004 03:53:09.567929   48440 command_runner.go:130] > # allowed_annotations = []
	I1004 03:53:09.567934   48440 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1004 03:53:09.567939   48440 command_runner.go:130] > # Where:
	I1004 03:53:09.567945   48440 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1004 03:53:09.567953   48440 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1004 03:53:09.567964   48440 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1004 03:53:09.567975   48440 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1004 03:53:09.567984   48440 command_runner.go:130] > #   in $PATH.
	I1004 03:53:09.567994   48440 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1004 03:53:09.568005   48440 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1004 03:53:09.568018   48440 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1004 03:53:09.568027   48440 command_runner.go:130] > #   state.
	I1004 03:53:09.568037   48440 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1004 03:53:09.568049   48440 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1004 03:53:09.568059   48440 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1004 03:53:09.568067   48440 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1004 03:53:09.568073   48440 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1004 03:53:09.568081   48440 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1004 03:53:09.568086   48440 command_runner.go:130] > #   The currently recognized values are:
	I1004 03:53:09.568093   48440 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1004 03:53:09.568099   48440 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1004 03:53:09.568107   48440 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1004 03:53:09.568115   48440 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1004 03:53:09.568122   48440 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1004 03:53:09.568130   48440 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1004 03:53:09.568137   48440 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1004 03:53:09.568144   48440 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1004 03:53:09.568150   48440 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1004 03:53:09.568158   48440 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1004 03:53:09.568171   48440 command_runner.go:130] > #   deprecated option "conmon".
	I1004 03:53:09.568180   48440 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1004 03:53:09.568185   48440 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1004 03:53:09.568193   48440 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1004 03:53:09.568200   48440 command_runner.go:130] > #   should be moved to the container's cgroup
	I1004 03:53:09.568206   48440 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1004 03:53:09.568213   48440 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1004 03:53:09.568221   48440 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1004 03:53:09.568228   48440 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1004 03:53:09.568231   48440 command_runner.go:130] > #
	I1004 03:53:09.568238   48440 command_runner.go:130] > # Using the seccomp notifier feature:
	I1004 03:53:09.568241   48440 command_runner.go:130] > #
	I1004 03:53:09.568247   48440 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1004 03:53:09.568256   48440 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1004 03:53:09.568259   48440 command_runner.go:130] > #
	I1004 03:53:09.568267   48440 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1004 03:53:09.568275   48440 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1004 03:53:09.568279   48440 command_runner.go:130] > #
	I1004 03:53:09.568284   48440 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1004 03:53:09.568290   48440 command_runner.go:130] > # feature.
	I1004 03:53:09.568293   48440 command_runner.go:130] > #
	I1004 03:53:09.568299   48440 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1004 03:53:09.568307   48440 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1004 03:53:09.568313   48440 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1004 03:53:09.568321   48440 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1004 03:53:09.568327   48440 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1004 03:53:09.568332   48440 command_runner.go:130] > #
	I1004 03:53:09.568337   48440 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1004 03:53:09.568345   48440 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1004 03:53:09.568348   48440 command_runner.go:130] > #
	I1004 03:53:09.568354   48440 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1004 03:53:09.568360   48440 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1004 03:53:09.568363   48440 command_runner.go:130] > #
	I1004 03:53:09.568377   48440 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1004 03:53:09.568386   48440 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1004 03:53:09.568390   48440 command_runner.go:130] > # limitation.
	I1004 03:53:09.568394   48440 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1004 03:53:09.568398   48440 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1004 03:53:09.568401   48440 command_runner.go:130] > runtime_type = "oci"
	I1004 03:53:09.568406   48440 command_runner.go:130] > runtime_root = "/run/runc"
	I1004 03:53:09.568410   48440 command_runner.go:130] > runtime_config_path = ""
	I1004 03:53:09.568415   48440 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1004 03:53:09.568420   48440 command_runner.go:130] > monitor_cgroup = "pod"
	I1004 03:53:09.568425   48440 command_runner.go:130] > monitor_exec_cgroup = ""
	I1004 03:53:09.568430   48440 command_runner.go:130] > monitor_env = [
	I1004 03:53:09.568435   48440 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1004 03:53:09.568440   48440 command_runner.go:130] > ]
	I1004 03:53:09.568444   48440 command_runner.go:130] > privileged_without_host_devices = false
	I1004 03:53:09.568454   48440 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1004 03:53:09.568459   48440 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1004 03:53:09.568471   48440 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1004 03:53:09.568481   48440 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1004 03:53:09.568488   48440 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1004 03:53:09.568496   48440 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1004 03:53:09.568507   48440 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1004 03:53:09.568515   48440 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1004 03:53:09.568520   48440 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1004 03:53:09.568527   48440 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1004 03:53:09.568530   48440 command_runner.go:130] > # Example:
	I1004 03:53:09.568534   48440 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1004 03:53:09.568539   48440 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1004 03:53:09.568543   48440 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1004 03:53:09.568547   48440 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1004 03:53:09.568551   48440 command_runner.go:130] > # cpuset = 0
	I1004 03:53:09.568555   48440 command_runner.go:130] > # cpushares = "0-1"
	I1004 03:53:09.568557   48440 command_runner.go:130] > # Where:
	I1004 03:53:09.568563   48440 command_runner.go:130] > # The workload name is workload-type.
	I1004 03:53:09.568569   48440 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1004 03:53:09.568574   48440 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1004 03:53:09.568579   48440 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1004 03:53:09.568586   48440 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1004 03:53:09.568591   48440 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1004 03:53:09.568595   48440 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1004 03:53:09.568601   48440 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1004 03:53:09.568608   48440 command_runner.go:130] > # Default value is set to true
	I1004 03:53:09.568612   48440 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1004 03:53:09.568617   48440 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1004 03:53:09.568622   48440 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1004 03:53:09.568627   48440 command_runner.go:130] > # Default value is set to 'false'
	I1004 03:53:09.568633   48440 command_runner.go:130] > # disable_hostport_mapping = false
	I1004 03:53:09.568639   48440 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1004 03:53:09.568643   48440 command_runner.go:130] > #
	I1004 03:53:09.568649   48440 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1004 03:53:09.568658   48440 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1004 03:53:09.568664   48440 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1004 03:53:09.568670   48440 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1004 03:53:09.568675   48440 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1004 03:53:09.568678   48440 command_runner.go:130] > [crio.image]
	I1004 03:53:09.568684   48440 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1004 03:53:09.568688   48440 command_runner.go:130] > # default_transport = "docker://"
	I1004 03:53:09.568696   48440 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1004 03:53:09.568701   48440 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1004 03:53:09.568705   48440 command_runner.go:130] > # global_auth_file = ""
	I1004 03:53:09.568710   48440 command_runner.go:130] > # The image used to instantiate infra containers.
	I1004 03:53:09.568714   48440 command_runner.go:130] > # This option supports live configuration reload.
	I1004 03:53:09.568718   48440 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1004 03:53:09.568724   48440 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1004 03:53:09.568730   48440 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1004 03:53:09.568734   48440 command_runner.go:130] > # This option supports live configuration reload.
	I1004 03:53:09.568744   48440 command_runner.go:130] > # pause_image_auth_file = ""
	I1004 03:53:09.568749   48440 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1004 03:53:09.568754   48440 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1004 03:53:09.568760   48440 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1004 03:53:09.568765   48440 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1004 03:53:09.568768   48440 command_runner.go:130] > # pause_command = "/pause"
	I1004 03:53:09.568774   48440 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1004 03:53:09.568779   48440 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1004 03:53:09.568783   48440 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1004 03:53:09.568789   48440 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1004 03:53:09.568794   48440 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1004 03:53:09.568799   48440 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1004 03:53:09.568802   48440 command_runner.go:130] > # pinned_images = [
	I1004 03:53:09.568806   48440 command_runner.go:130] > # ]
	I1004 03:53:09.568811   48440 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1004 03:53:09.568817   48440 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1004 03:53:09.568824   48440 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1004 03:53:09.568830   48440 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1004 03:53:09.568835   48440 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1004 03:53:09.568838   48440 command_runner.go:130] > # signature_policy = ""
	I1004 03:53:09.568843   48440 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1004 03:53:09.568852   48440 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1004 03:53:09.568858   48440 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1004 03:53:09.568864   48440 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1004 03:53:09.568869   48440 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1004 03:53:09.568877   48440 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1004 03:53:09.568882   48440 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1004 03:53:09.568892   48440 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1004 03:53:09.568896   48440 command_runner.go:130] > # changing them here.
	I1004 03:53:09.568900   48440 command_runner.go:130] > # insecure_registries = [
	I1004 03:53:09.568903   48440 command_runner.go:130] > # ]
	I1004 03:53:09.568909   48440 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1004 03:53:09.568916   48440 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1004 03:53:09.568926   48440 command_runner.go:130] > # image_volumes = "mkdir"
	I1004 03:53:09.568933   48440 command_runner.go:130] > # Temporary directory to use for storing big files
	I1004 03:53:09.568937   48440 command_runner.go:130] > # big_files_temporary_dir = ""
	I1004 03:53:09.568942   48440 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1004 03:53:09.568948   48440 command_runner.go:130] > # CNI plugins.
	I1004 03:53:09.568952   48440 command_runner.go:130] > [crio.network]
	I1004 03:53:09.568957   48440 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1004 03:53:09.568964   48440 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1004 03:53:09.568968   48440 command_runner.go:130] > # cni_default_network = ""
	I1004 03:53:09.568975   48440 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1004 03:53:09.568979   48440 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1004 03:53:09.568987   48440 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1004 03:53:09.568991   48440 command_runner.go:130] > # plugin_dirs = [
	I1004 03:53:09.568996   48440 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1004 03:53:09.568999   48440 command_runner.go:130] > # ]
	I1004 03:53:09.569005   48440 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1004 03:53:09.569011   48440 command_runner.go:130] > [crio.metrics]
	I1004 03:53:09.569015   48440 command_runner.go:130] > # Globally enable or disable metrics support.
	I1004 03:53:09.569019   48440 command_runner.go:130] > enable_metrics = true
	I1004 03:53:09.569023   48440 command_runner.go:130] > # Specify enabled metrics collectors.
	I1004 03:53:09.569030   48440 command_runner.go:130] > # Per default all metrics are enabled.
	I1004 03:53:09.569036   48440 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1004 03:53:09.569042   48440 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1004 03:53:09.569049   48440 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1004 03:53:09.569053   48440 command_runner.go:130] > # metrics_collectors = [
	I1004 03:53:09.569059   48440 command_runner.go:130] > # 	"operations",
	I1004 03:53:09.569063   48440 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1004 03:53:09.569067   48440 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1004 03:53:09.569071   48440 command_runner.go:130] > # 	"operations_errors",
	I1004 03:53:09.569075   48440 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1004 03:53:09.569083   48440 command_runner.go:130] > # 	"image_pulls_by_name",
	I1004 03:53:09.569090   48440 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1004 03:53:09.569094   48440 command_runner.go:130] > # 	"image_pulls_failures",
	I1004 03:53:09.569104   48440 command_runner.go:130] > # 	"image_pulls_successes",
	I1004 03:53:09.569109   48440 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1004 03:53:09.569112   48440 command_runner.go:130] > # 	"image_layer_reuse",
	I1004 03:53:09.569117   48440 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1004 03:53:09.569124   48440 command_runner.go:130] > # 	"containers_oom_total",
	I1004 03:53:09.569129   48440 command_runner.go:130] > # 	"containers_oom",
	I1004 03:53:09.569133   48440 command_runner.go:130] > # 	"processes_defunct",
	I1004 03:53:09.569137   48440 command_runner.go:130] > # 	"operations_total",
	I1004 03:53:09.569140   48440 command_runner.go:130] > # 	"operations_latency_seconds",
	I1004 03:53:09.569145   48440 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1004 03:53:09.569148   48440 command_runner.go:130] > # 	"operations_errors_total",
	I1004 03:53:09.569152   48440 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1004 03:53:09.569157   48440 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1004 03:53:09.569161   48440 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1004 03:53:09.569165   48440 command_runner.go:130] > # 	"image_pulls_success_total",
	I1004 03:53:09.569169   48440 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1004 03:53:09.569173   48440 command_runner.go:130] > # 	"containers_oom_count_total",
	I1004 03:53:09.569180   48440 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1004 03:53:09.569184   48440 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1004 03:53:09.569189   48440 command_runner.go:130] > # ]
	I1004 03:53:09.569194   48440 command_runner.go:130] > # The port on which the metrics server will listen.
	I1004 03:53:09.569200   48440 command_runner.go:130] > # metrics_port = 9090
	I1004 03:53:09.569205   48440 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1004 03:53:09.569211   48440 command_runner.go:130] > # metrics_socket = ""
	I1004 03:53:09.569218   48440 command_runner.go:130] > # The certificate for the secure metrics server.
	I1004 03:53:09.569226   48440 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1004 03:53:09.569232   48440 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1004 03:53:09.569237   48440 command_runner.go:130] > # certificate on any modification event.
	I1004 03:53:09.569240   48440 command_runner.go:130] > # metrics_cert = ""
	I1004 03:53:09.569245   48440 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1004 03:53:09.569252   48440 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1004 03:53:09.569255   48440 command_runner.go:130] > # metrics_key = ""
	I1004 03:53:09.569261   48440 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1004 03:53:09.569267   48440 command_runner.go:130] > [crio.tracing]
	I1004 03:53:09.569272   48440 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1004 03:53:09.569277   48440 command_runner.go:130] > # enable_tracing = false
	I1004 03:53:09.569282   48440 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1004 03:53:09.569286   48440 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1004 03:53:09.569293   48440 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1004 03:53:09.569299   48440 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1004 03:53:09.569304   48440 command_runner.go:130] > # CRI-O NRI configuration.
	I1004 03:53:09.569309   48440 command_runner.go:130] > [crio.nri]
	I1004 03:53:09.569314   48440 command_runner.go:130] > # Globally enable or disable NRI.
	I1004 03:53:09.569319   48440 command_runner.go:130] > # enable_nri = false
	I1004 03:53:09.569323   48440 command_runner.go:130] > # NRI socket to listen on.
	I1004 03:53:09.569327   48440 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1004 03:53:09.569333   48440 command_runner.go:130] > # NRI plugin directory to use.
	I1004 03:53:09.569338   48440 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1004 03:53:09.569347   48440 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1004 03:53:09.569354   48440 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1004 03:53:09.569359   48440 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1004 03:53:09.569365   48440 command_runner.go:130] > # nri_disable_connections = false
	I1004 03:53:09.569370   48440 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1004 03:53:09.569375   48440 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1004 03:53:09.569380   48440 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1004 03:53:09.569386   48440 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1004 03:53:09.569392   48440 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1004 03:53:09.569398   48440 command_runner.go:130] > [crio.stats]
	I1004 03:53:09.569403   48440 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1004 03:53:09.569410   48440 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1004 03:53:09.569414   48440 command_runner.go:130] > # stats_collection_period = 0
	I1004 03:53:09.569660   48440 command_runner.go:130] ! time="2024-10-04 03:53:09.532926383Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1004 03:53:09.569685   48440 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1004 03:53:09.569757   48440 cni.go:84] Creating CNI manager for ""
	I1004 03:53:09.569769   48440 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1004 03:53:09.569778   48440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 03:53:09.569800   48440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-355278 NodeName:multinode-355278 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 03:53:09.569937   48440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-355278"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 03:53:09.569997   48440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 03:53:09.580438   48440 command_runner.go:130] > kubeadm
	I1004 03:53:09.580461   48440 command_runner.go:130] > kubectl
	I1004 03:53:09.580467   48440 command_runner.go:130] > kubelet
	I1004 03:53:09.580504   48440 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 03:53:09.580563   48440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 03:53:09.590521   48440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1004 03:53:09.608031   48440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:53:09.625081   48440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1004 03:53:09.642070   48440 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I1004 03:53:09.646149   48440 command_runner.go:130] > 192.168.39.50	control-plane.minikube.internal
	I1004 03:53:09.646230   48440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:53:09.783962   48440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:53:09.799133   48440 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278 for IP: 192.168.39.50
	I1004 03:53:09.799159   48440 certs.go:194] generating shared ca certs ...
	I1004 03:53:09.799183   48440 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:53:09.799355   48440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 03:53:09.799410   48440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 03:53:09.799423   48440 certs.go:256] generating profile certs ...
	I1004 03:53:09.799509   48440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/client.key
	I1004 03:53:09.799606   48440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/apiserver.key.a40bf4c6
	I1004 03:53:09.799674   48440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/proxy-client.key
	I1004 03:53:09.799687   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 03:53:09.799717   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 03:53:09.799735   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 03:53:09.799757   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 03:53:09.799775   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 03:53:09.799816   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 03:53:09.799832   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 03:53:09.799847   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 03:53:09.799902   48440 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 03:53:09.799937   48440 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 03:53:09.799946   48440 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:53:09.799969   48440 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 03:53:09.799991   48440 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:53:09.800012   48440 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 03:53:09.800046   48440 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 03:53:09.800071   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> /usr/share/ca-certificates/168792.pem
	I1004 03:53:09.800084   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:53:09.800096   48440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem -> /usr/share/ca-certificates/16879.pem
	I1004 03:53:09.800655   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:53:09.825854   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 03:53:09.850027   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:53:09.876149   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 03:53:09.900789   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1004 03:53:09.925749   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 03:53:09.950235   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:53:09.975491   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/multinode-355278/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:53:10.000202   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 03:53:10.025938   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:53:10.052344   48440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 03:53:10.077556   48440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 03:53:10.094797   48440 ssh_runner.go:195] Run: openssl version
	I1004 03:53:10.101021   48440 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1004 03:53:10.101082   48440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:53:10.112432   48440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:53:10.117158   48440 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:53:10.117336   48440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:53:10.117401   48440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:53:10.123054   48440 command_runner.go:130] > b5213941
	I1004 03:53:10.123112   48440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:53:10.132553   48440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 03:53:10.143743   48440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 03:53:10.148480   48440 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:53:10.148516   48440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 03:53:10.148573   48440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 03:53:10.155102   48440 command_runner.go:130] > 51391683
	I1004 03:53:10.155180   48440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 03:53:10.164713   48440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 03:53:10.175642   48440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 03:53:10.180194   48440 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:53:10.180311   48440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 03:53:10.180359   48440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 03:53:10.186171   48440 command_runner.go:130] > 3ec20f2e
	I1004 03:53:10.186261   48440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:53:10.195634   48440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:53:10.200465   48440 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:53:10.200486   48440 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1004 03:53:10.200492   48440 command_runner.go:130] > Device: 253,1	Inode: 9431080     Links: 1
	I1004 03:53:10.200499   48440 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1004 03:53:10.200508   48440 command_runner.go:130] > Access: 2024-10-04 03:46:16.286984072 +0000
	I1004 03:53:10.200514   48440 command_runner.go:130] > Modify: 2024-10-04 03:46:16.286984072 +0000
	I1004 03:53:10.200524   48440 command_runner.go:130] > Change: 2024-10-04 03:46:16.286984072 +0000
	I1004 03:53:10.200531   48440 command_runner.go:130] >  Birth: 2024-10-04 03:46:16.286984072 +0000
	I1004 03:53:10.200596   48440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 03:53:10.206382   48440 command_runner.go:130] > Certificate will not expire
	I1004 03:53:10.206451   48440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 03:53:10.212424   48440 command_runner.go:130] > Certificate will not expire
	I1004 03:53:10.212496   48440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 03:53:10.218219   48440 command_runner.go:130] > Certificate will not expire
	I1004 03:53:10.218286   48440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 03:53:10.223734   48440 command_runner.go:130] > Certificate will not expire
	I1004 03:53:10.224032   48440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 03:53:10.229781   48440 command_runner.go:130] > Certificate will not expire
	I1004 03:53:10.230052   48440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1004 03:53:10.236210   48440 command_runner.go:130] > Certificate will not expire
	I1004 03:53:10.236289   48440 kubeadm.go:392] StartCluster: {Name:multinode-355278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-355278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:53:10.236435   48440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 03:53:10.236482   48440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 03:53:10.278178   48440 command_runner.go:130] > 39a1ba2038e63d37b1d5f6533e8a48537d6f340aa3d13386b09dea55f6c22bde
	I1004 03:53:10.278271   48440 command_runner.go:130] > 7539f13d2609f891b7c1f281a29b0fd3ced6da7f0bb4aaec10bf7effb8ac2aec
	I1004 03:53:10.278311   48440 command_runner.go:130] > 6e7a1e7686c42fd4e684ccf5b0bb9ba22216642a608e382b7792b5b05c69b917
	I1004 03:53:10.278489   48440 command_runner.go:130] > 71f8b904bf2474edb78e656c449ae2877649b759936059875692a6a65aff51b5
	I1004 03:53:10.278551   48440 command_runner.go:130] > af880375229d67caf4e5f2f47f45f53fbe2ea8a7929ddfbed89ae712f1df9782
	I1004 03:53:10.278621   48440 command_runner.go:130] > b2c4811c6b28cad42ef132c3e4f94439f6a414a115217beca429f3f52c44a124
	I1004 03:53:10.278686   48440 command_runner.go:130] > b52fb2f1d2ee4270424d69c08b4c23e2cb78fbf86cfbe91d7fe5854543fb3a00
	I1004 03:53:10.278897   48440 command_runner.go:130] > 45cd0fd028aa821ba70f413472a0632ce6257bd3c40aa7e6498175238374a2d5
	I1004 03:53:10.278916   48440 command_runner.go:130] > 63272753e04e6a82d0e74cf60c149ce5823931f9d15dee5a6c9cad14acfbc509
	I1004 03:53:10.280492   48440 cri.go:89] found id: "39a1ba2038e63d37b1d5f6533e8a48537d6f340aa3d13386b09dea55f6c22bde"
	I1004 03:53:10.280507   48440 cri.go:89] found id: "7539f13d2609f891b7c1f281a29b0fd3ced6da7f0bb4aaec10bf7effb8ac2aec"
	I1004 03:53:10.280512   48440 cri.go:89] found id: "6e7a1e7686c42fd4e684ccf5b0bb9ba22216642a608e382b7792b5b05c69b917"
	I1004 03:53:10.280515   48440 cri.go:89] found id: "71f8b904bf2474edb78e656c449ae2877649b759936059875692a6a65aff51b5"
	I1004 03:53:10.280518   48440 cri.go:89] found id: "af880375229d67caf4e5f2f47f45f53fbe2ea8a7929ddfbed89ae712f1df9782"
	I1004 03:53:10.280524   48440 cri.go:89] found id: "b2c4811c6b28cad42ef132c3e4f94439f6a414a115217beca429f3f52c44a124"
	I1004 03:53:10.280527   48440 cri.go:89] found id: "b52fb2f1d2ee4270424d69c08b4c23e2cb78fbf86cfbe91d7fe5854543fb3a00"
	I1004 03:53:10.280530   48440 cri.go:89] found id: "45cd0fd028aa821ba70f413472a0632ce6257bd3c40aa7e6498175238374a2d5"
	I1004 03:53:10.280533   48440 cri.go:89] found id: "63272753e04e6a82d0e74cf60c149ce5823931f9d15dee5a6c9cad14acfbc509"
	I1004 03:53:10.280539   48440 cri.go:89] found id: ""
	I1004 03:53:10.280591   48440 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
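Editor's note: the tail of the log above shows cri.go collecting kube-system container IDs by running "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" and turning each output line into a "found id" entry. The Go sketch below reproduces that shape for reference only; it is not minikube's cri.go, and the helper name kubeSystemContainerIDs is made up for this example.

// Illustrative sketch (not minikube's cri.go): collect kube-system container
// IDs the same way the log above does, by shelling out to crictl with a
// namespace label filter and splitting the quiet output into one ID per line.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}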
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-355278 -n multinode-355278
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-355278 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.44s)
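Editor's note: earlier in the same log, certs.go verifies each certificate under /var/lib/minikube/certs with "openssl x509 -noout -checkend 86400", i.e. it asks whether the certificate expires within the next 24 hours. For reference, the equivalent check in Go looks roughly like the sketch below; this is an illustration under stated assumptions, not minikube's implementation, and the hard-coded path is simply one of the files checked in the log.

// A minimal sketch of the 24-hour expiry check that "openssl x509 -checkend 86400"
// performs in the log above. Not minikube's code; the path is an example.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the files checked in the log; any PEM certificate works.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -checkend 86400: does the certificate expire within the next 24 hours?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}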

                                                
                                    
x
+
TestPreload (271.79s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-375700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1004 04:01:52.070520   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:02:08.996337   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:02:15.016926   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-375700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m8.275919138s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-375700 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-375700 image pull gcr.io/k8s-minikube/busybox: (3.464589814s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-375700
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-375700: exit status 82 (2m0.477988758s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-375700"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-375700 failed: exit status 82
panic.go:629: *** TestPreload FAILED at 2024-10-04 04:05:24.971303722 +0000 UTC m=+4643.904244275
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-375700 -n test-preload-375700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-375700 -n test-preload-375700: exit status 3 (18.652124277s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 04:05:43.620143   53318 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	E1004 04:05:43.620168   53318 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-375700" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-375700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-375700
--- FAIL: TestPreload (271.79s)
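
The preload test fails at its stop step: `out/minikube-linux-amd64 stop -p test-preload-375700` returns exit status 82 (GUEST_STOP_TIMEOUT) after about two minutes, and the follow-up status check can no longer reach the VM over SSH. When reproducing that step by hand it helps to wrap the stop in an explicit outer deadline so a hung stop cannot block the session; the Go sketch below does that. The binary path and profile name are taken from this run, while the wrapper and its five-minute deadline are illustrative assumptions, not part of the test suite.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// stopProfile runs `minikube stop` for a profile under an outer deadline so a
// stop that hangs (like the exit status 82 above) is cut off instead of
// blocking indefinitely. The binary path, profile, and timeout are assumptions
// for local triage only.
func stopProfile(binary, profile string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	out, err := exec.CommandContext(ctx, binary, "stop", "-p", profile).CombinedOutput()
	fmt.Printf("%s", out)
	if ctx.Err() == context.DeadlineExceeded {
		return fmt.Errorf("stopping %q exceeded %s", profile, timeout)
	}
	return err
}

func main() {
	if err := stopProfile("out/minikube-linux-amd64", "test-preload-375700", 5*time.Minute); err != nil {
		fmt.Println("stop failed:", err)
	}
}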

                                                
                                    
TestKubernetesUpgrade (391.47s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-326061 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-326061 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m2.599072338s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-326061] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-326061" primary control-plane node in "kubernetes-upgrade-326061" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 04:07:39.291311   54385 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:07:39.291429   54385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:07:39.291434   54385 out.go:358] Setting ErrFile to fd 2...
	I1004 04:07:39.291438   54385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:07:39.291627   54385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:07:39.292223   54385 out.go:352] Setting JSON to false
	I1004 04:07:39.293092   54385 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6604,"bootTime":1728008255,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 04:07:39.293191   54385 start.go:139] virtualization: kvm guest
	I1004 04:07:39.294631   54385 out.go:177] * [kubernetes-upgrade-326061] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 04:07:39.296232   54385 notify.go:220] Checking for updates...
	I1004 04:07:39.297287   54385 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 04:07:39.298550   54385 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 04:07:39.299723   54385 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:07:39.300819   54385 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:07:39.302281   54385 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 04:07:39.303568   54385 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 04:07:39.304945   54385 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 04:07:39.342462   54385 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 04:07:39.343593   54385 start.go:297] selected driver: kvm2
	I1004 04:07:39.343614   54385 start.go:901] validating driver "kvm2" against <nil>
	I1004 04:07:39.343629   54385 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 04:07:39.344567   54385 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:07:42.027694   54385 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 04:07:42.044649   54385 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 04:07:42.044740   54385 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 04:07:42.045088   54385 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1004 04:07:42.045135   54385 cni.go:84] Creating CNI manager for ""
	I1004 04:07:42.045193   54385 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:07:42.045209   54385 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 04:07:42.045289   54385 start.go:340] cluster config:
	{Name:kubernetes-upgrade-326061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-326061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:07:42.045430   54385 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:07:42.047079   54385 out.go:177] * Starting "kubernetes-upgrade-326061" primary control-plane node in "kubernetes-upgrade-326061" cluster
	I1004 04:07:42.048373   54385 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1004 04:07:42.048418   54385 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1004 04:07:42.048443   54385 cache.go:56] Caching tarball of preloaded images
	I1004 04:07:42.048545   54385 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 04:07:42.048558   54385 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1004 04:07:42.048951   54385 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/config.json ...
	I1004 04:07:42.048982   54385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/config.json: {Name:mk87618e3d2170e32bb9c3818bf7ae8fff7c20bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:07:42.049124   54385 start.go:360] acquireMachinesLock for kubernetes-upgrade-326061: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:08:11.920801   54385 start.go:364] duration metric: took 29.871626831s to acquireMachinesLock for "kubernetes-upgrade-326061"
	I1004 04:08:11.920870   54385 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-326061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-326061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:08:11.920964   54385 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 04:08:11.923166   54385 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 04:08:11.923402   54385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:08:11.923461   54385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:08:11.940803   54385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35349
	I1004 04:08:11.941290   54385 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:08:11.941949   54385 main.go:141] libmachine: Using API Version  1
	I1004 04:08:11.941970   54385 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:08:11.942301   54385 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:08:11.942482   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetMachineName
	I1004 04:08:11.942649   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .DriverName
	I1004 04:08:11.942779   54385 start.go:159] libmachine.API.Create for "kubernetes-upgrade-326061" (driver="kvm2")
	I1004 04:08:11.942814   54385 client.go:168] LocalClient.Create starting
	I1004 04:08:11.942848   54385 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 04:08:11.942883   54385 main.go:141] libmachine: Decoding PEM data...
	I1004 04:08:11.942918   54385 main.go:141] libmachine: Parsing certificate...
	I1004 04:08:11.942973   54385 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 04:08:11.942992   54385 main.go:141] libmachine: Decoding PEM data...
	I1004 04:08:11.943003   54385 main.go:141] libmachine: Parsing certificate...
	I1004 04:08:11.943021   54385 main.go:141] libmachine: Running pre-create checks...
	I1004 04:08:11.943033   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .PreCreateCheck
	I1004 04:08:11.943397   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetConfigRaw
	I1004 04:08:11.943767   54385 main.go:141] libmachine: Creating machine...
	I1004 04:08:11.943803   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .Create
	I1004 04:08:11.943952   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Creating KVM machine...
	I1004 04:08:11.945212   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found existing default KVM network
	I1004 04:08:11.945991   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:11.945841   56967 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:58:43:e4} reservation:<nil>}
	I1004 04:08:11.946627   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:11.946538   56967 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201a50}
	I1004 04:08:11.946662   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | created network xml: 
	I1004 04:08:11.946675   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | <network>
	I1004 04:08:11.946684   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG |   <name>mk-kubernetes-upgrade-326061</name>
	I1004 04:08:11.946709   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG |   <dns enable='no'/>
	I1004 04:08:11.946730   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG |   
	I1004 04:08:11.946745   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1004 04:08:11.946779   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG |     <dhcp>
	I1004 04:08:11.946794   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1004 04:08:11.946807   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG |     </dhcp>
	I1004 04:08:11.946819   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG |   </ip>
	I1004 04:08:11.946827   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG |   
	I1004 04:08:11.946838   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | </network>
	I1004 04:08:11.946847   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | 
	I1004 04:08:11.951763   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | trying to create private KVM network mk-kubernetes-upgrade-326061 192.168.50.0/24...
	I1004 04:08:12.024618   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | private KVM network mk-kubernetes-upgrade-326061 192.168.50.0/24 created
	I1004 04:08:12.024663   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:12.024589   56967 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:08:12.024677   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/kubernetes-upgrade-326061 ...
	I1004 04:08:12.024697   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 04:08:12.024722   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 04:08:12.260084   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:12.259894   56967 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/kubernetes-upgrade-326061/id_rsa...
	I1004 04:08:12.533259   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:12.533076   56967 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/kubernetes-upgrade-326061/kubernetes-upgrade-326061.rawdisk...
	I1004 04:08:12.533301   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Writing magic tar header
	I1004 04:08:12.533322   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Writing SSH key tar header
	I1004 04:08:12.533337   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:12.533194   56967 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/kubernetes-upgrade-326061 ...
	I1004 04:08:12.533350   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/kubernetes-upgrade-326061 (perms=drwx------)
	I1004 04:08:12.533366   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 04:08:12.533377   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 04:08:12.533399   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/kubernetes-upgrade-326061
	I1004 04:08:12.533418   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 04:08:12.533432   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:08:12.533489   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 04:08:12.533538   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 04:08:12.533551   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 04:08:12.533565   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 04:08:12.533576   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Creating domain...
	I1004 04:08:12.533608   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 04:08:12.533628   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Checking permissions on dir: /home/jenkins
	I1004 04:08:12.533638   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Checking permissions on dir: /home
	I1004 04:08:12.533649   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Skipping /home - not owner
	I1004 04:08:12.534578   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) define libvirt domain using xml: 
	I1004 04:08:12.534599   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) <domain type='kvm'>
	I1004 04:08:12.534607   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)   <name>kubernetes-upgrade-326061</name>
	I1004 04:08:12.534616   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)   <memory unit='MiB'>2200</memory>
	I1004 04:08:12.534623   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)   <vcpu>2</vcpu>
	I1004 04:08:12.534630   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)   <features>
	I1004 04:08:12.534637   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     <acpi/>
	I1004 04:08:12.534646   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     <apic/>
	I1004 04:08:12.534655   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     <pae/>
	I1004 04:08:12.534664   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     
	I1004 04:08:12.534675   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)   </features>
	I1004 04:08:12.534682   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)   <cpu mode='host-passthrough'>
	I1004 04:08:12.534687   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)   
	I1004 04:08:12.534694   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)   </cpu>
	I1004 04:08:12.534698   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)   <os>
	I1004 04:08:12.534703   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     <type>hvm</type>
	I1004 04:08:12.534730   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     <boot dev='cdrom'/>
	I1004 04:08:12.534772   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     <boot dev='hd'/>
	I1004 04:08:12.534785   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     <bootmenu enable='no'/>
	I1004 04:08:12.534795   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)   </os>
	I1004 04:08:12.534803   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)   <devices>
	I1004 04:08:12.534813   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     <disk type='file' device='cdrom'>
	I1004 04:08:12.534836   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/kubernetes-upgrade-326061/boot2docker.iso'/>
	I1004 04:08:12.534850   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)       <target dev='hdc' bus='scsi'/>
	I1004 04:08:12.534862   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)       <readonly/>
	I1004 04:08:12.534871   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     </disk>
	I1004 04:08:12.534884   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     <disk type='file' device='disk'>
	I1004 04:08:12.534896   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 04:08:12.534913   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/kubernetes-upgrade-326061/kubernetes-upgrade-326061.rawdisk'/>
	I1004 04:08:12.534927   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)       <target dev='hda' bus='virtio'/>
	I1004 04:08:12.534938   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     </disk>
	I1004 04:08:12.534949   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     <interface type='network'>
	I1004 04:08:12.534958   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)       <source network='mk-kubernetes-upgrade-326061'/>
	I1004 04:08:12.534968   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)       <model type='virtio'/>
	I1004 04:08:12.534975   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     </interface>
	I1004 04:08:12.534990   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     <interface type='network'>
	I1004 04:08:12.535006   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)       <source network='default'/>
	I1004 04:08:12.535018   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)       <model type='virtio'/>
	I1004 04:08:12.535027   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     </interface>
	I1004 04:08:12.535036   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     <serial type='pty'>
	I1004 04:08:12.535046   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)       <target port='0'/>
	I1004 04:08:12.535055   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     </serial>
	I1004 04:08:12.535064   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     <console type='pty'>
	I1004 04:08:12.535074   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)       <target type='serial' port='0'/>
	I1004 04:08:12.535088   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     </console>
	I1004 04:08:12.535099   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     <rng model='virtio'>
	I1004 04:08:12.535110   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)       <backend model='random'>/dev/random</backend>
	I1004 04:08:12.535120   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     </rng>
	I1004 04:08:12.535129   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     
	I1004 04:08:12.535137   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)     
	I1004 04:08:12.535158   54385 main.go:141] libmachine: (kubernetes-upgrade-326061)   </devices>
	I1004 04:08:12.535173   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) </domain>
	I1004 04:08:12.535187   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) 
	I1004 04:08:12.539384   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:12:bf:be in network default
	I1004 04:08:12.540019   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Ensuring networks are active...
	I1004 04:08:12.540044   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:12.540779   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Ensuring network default is active
	I1004 04:08:12.541174   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Ensuring network mk-kubernetes-upgrade-326061 is active
	I1004 04:08:12.541743   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Getting domain xml...
	I1004 04:08:12.542377   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Creating domain...
	I1004 04:08:13.894585   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Waiting to get IP...
	I1004 04:08:13.895554   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:13.895981   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | unable to find current IP address of domain kubernetes-upgrade-326061 in network mk-kubernetes-upgrade-326061
	I1004 04:08:13.896062   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:13.895980   56967 retry.go:31] will retry after 227.822277ms: waiting for machine to come up
	I1004 04:08:14.125723   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:14.126263   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | unable to find current IP address of domain kubernetes-upgrade-326061 in network mk-kubernetes-upgrade-326061
	I1004 04:08:14.126290   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:14.126212   56967 retry.go:31] will retry after 369.756988ms: waiting for machine to come up
	I1004 04:08:14.497849   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:14.498247   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | unable to find current IP address of domain kubernetes-upgrade-326061 in network mk-kubernetes-upgrade-326061
	I1004 04:08:14.498274   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:14.498201   56967 retry.go:31] will retry after 377.965765ms: waiting for machine to come up
	I1004 04:08:14.878053   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:14.878634   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | unable to find current IP address of domain kubernetes-upgrade-326061 in network mk-kubernetes-upgrade-326061
	I1004 04:08:14.878666   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:14.878603   56967 retry.go:31] will retry after 444.01649ms: waiting for machine to come up
	I1004 04:08:15.324523   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:15.325022   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | unable to find current IP address of domain kubernetes-upgrade-326061 in network mk-kubernetes-upgrade-326061
	I1004 04:08:15.325050   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:15.324982   56967 retry.go:31] will retry after 692.691991ms: waiting for machine to come up
	I1004 04:08:16.018779   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:16.019191   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | unable to find current IP address of domain kubernetes-upgrade-326061 in network mk-kubernetes-upgrade-326061
	I1004 04:08:16.019215   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:16.019135   56967 retry.go:31] will retry after 874.353739ms: waiting for machine to come up
	I1004 04:08:16.894863   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:16.895349   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | unable to find current IP address of domain kubernetes-upgrade-326061 in network mk-kubernetes-upgrade-326061
	I1004 04:08:16.895391   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:16.895311   56967 retry.go:31] will retry after 1.017200956s: waiting for machine to come up
	I1004 04:08:17.913928   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:17.914542   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | unable to find current IP address of domain kubernetes-upgrade-326061 in network mk-kubernetes-upgrade-326061
	I1004 04:08:17.914569   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:17.914499   56967 retry.go:31] will retry after 915.785032ms: waiting for machine to come up
	I1004 04:08:18.832026   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:18.832548   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | unable to find current IP address of domain kubernetes-upgrade-326061 in network mk-kubernetes-upgrade-326061
	I1004 04:08:18.832571   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:18.832486   56967 retry.go:31] will retry after 1.449290907s: waiting for machine to come up
	I1004 04:08:20.283228   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:20.283634   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | unable to find current IP address of domain kubernetes-upgrade-326061 in network mk-kubernetes-upgrade-326061
	I1004 04:08:20.283659   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:20.283599   56967 retry.go:31] will retry after 1.423361853s: waiting for machine to come up
	I1004 04:08:21.709067   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:21.709505   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | unable to find current IP address of domain kubernetes-upgrade-326061 in network mk-kubernetes-upgrade-326061
	I1004 04:08:21.709544   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:21.709461   56967 retry.go:31] will retry after 2.689881876s: waiting for machine to come up
	I1004 04:08:24.401768   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:24.402335   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | unable to find current IP address of domain kubernetes-upgrade-326061 in network mk-kubernetes-upgrade-326061
	I1004 04:08:24.402362   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:24.402264   56967 retry.go:31] will retry after 2.460824647s: waiting for machine to come up
	I1004 04:08:26.864753   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:26.865161   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | unable to find current IP address of domain kubernetes-upgrade-326061 in network mk-kubernetes-upgrade-326061
	I1004 04:08:26.865183   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:26.865120   56967 retry.go:31] will retry after 3.633666237s: waiting for machine to come up
	I1004 04:08:30.503303   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:30.503773   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | unable to find current IP address of domain kubernetes-upgrade-326061 in network mk-kubernetes-upgrade-326061
	I1004 04:08:30.503802   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | I1004 04:08:30.503716   56967 retry.go:31] will retry after 4.336145506s: waiting for machine to come up
	I1004 04:08:34.841028   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:34.841542   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Found IP for machine: 192.168.50.58
	I1004 04:08:34.841566   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Reserving static IP address...
	I1004 04:08:34.841580   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has current primary IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:34.841956   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-326061", mac: "52:54:00:ca:71:c1", ip: "192.168.50.58"} in network mk-kubernetes-upgrade-326061
	I1004 04:08:34.922532   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Getting to WaitForSSH function...
	I1004 04:08:34.922569   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Reserved static IP address: 192.168.50.58
	I1004 04:08:34.922585   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Waiting for SSH to be available...
	I1004 04:08:34.925781   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:34.926317   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:34.926344   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:34.926531   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Using SSH client type: external
	I1004 04:08:34.926562   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/kubernetes-upgrade-326061/id_rsa (-rw-------)
	I1004 04:08:34.926608   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/kubernetes-upgrade-326061/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:08:34.926622   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | About to run SSH command:
	I1004 04:08:34.926638   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | exit 0
	I1004 04:08:35.048315   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | SSH cmd err, output: <nil>: 
	I1004 04:08:35.048600   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) KVM machine creation complete!
	I1004 04:08:35.048997   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetConfigRaw
	I1004 04:08:35.049592   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .DriverName
	I1004 04:08:35.049842   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .DriverName
	I1004 04:08:35.050017   54385 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 04:08:35.050034   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetState
	I1004 04:08:35.051555   54385 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 04:08:35.051573   54385 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 04:08:35.051581   54385 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 04:08:35.051590   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHHostname
	I1004 04:08:35.054388   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:35.055122   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:35.055177   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:35.055418   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHPort
	I1004 04:08:35.055646   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:08:35.055823   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:08:35.055947   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHUsername
	I1004 04:08:35.056161   54385 main.go:141] libmachine: Using SSH client type: native
	I1004 04:08:35.056348   54385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.58 22 <nil> <nil>}
	I1004 04:08:35.056359   54385 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 04:08:35.159728   54385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:08:35.159753   54385 main.go:141] libmachine: Detecting the provisioner...
	I1004 04:08:35.159761   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHHostname
	I1004 04:08:35.163227   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:35.163614   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:35.163643   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:35.163860   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHPort
	I1004 04:08:35.164067   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:08:35.164208   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:08:35.164373   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHUsername
	I1004 04:08:35.164578   54385 main.go:141] libmachine: Using SSH client type: native
	I1004 04:08:35.164740   54385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.58 22 <nil> <nil>}
	I1004 04:08:35.164749   54385 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 04:08:35.265082   54385 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 04:08:35.265127   54385 main.go:141] libmachine: found compatible host: buildroot
	I1004 04:08:35.265134   54385 main.go:141] libmachine: Provisioning with buildroot...
	I1004 04:08:35.265141   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetMachineName
	I1004 04:08:35.265387   54385 buildroot.go:166] provisioning hostname "kubernetes-upgrade-326061"
	I1004 04:08:35.265424   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetMachineName
	I1004 04:08:35.265647   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHHostname
	I1004 04:08:35.268330   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:35.268665   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:35.268693   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:35.268891   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHPort
	I1004 04:08:35.269081   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:08:35.269248   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:08:35.269393   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHUsername
	I1004 04:08:35.269539   54385 main.go:141] libmachine: Using SSH client type: native
	I1004 04:08:35.269729   54385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.58 22 <nil> <nil>}
	I1004 04:08:35.269746   54385 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-326061 && echo "kubernetes-upgrade-326061" | sudo tee /etc/hostname
	I1004 04:08:35.384020   54385 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-326061
	
	I1004 04:08:35.384049   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHHostname
	I1004 04:08:35.386901   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:35.387334   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:35.387361   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:35.387527   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHPort
	I1004 04:08:35.387719   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:08:35.387889   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:08:35.388044   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHUsername
	I1004 04:08:35.388216   54385 main.go:141] libmachine: Using SSH client type: native
	I1004 04:08:35.388449   54385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.58 22 <nil> <nil>}
	I1004 04:08:35.388474   54385 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-326061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-326061/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-326061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:08:35.497593   54385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:08:35.497619   54385 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:08:35.497651   54385 buildroot.go:174] setting up certificates
	I1004 04:08:35.497661   54385 provision.go:84] configureAuth start
	I1004 04:08:35.497675   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetMachineName
	I1004 04:08:35.497918   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetIP
	I1004 04:08:35.500290   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:35.500588   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:35.500614   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:35.500798   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHHostname
	I1004 04:08:35.502847   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:35.503161   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:35.503194   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:35.503315   54385 provision.go:143] copyHostCerts
	I1004 04:08:35.503383   54385 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:08:35.503395   54385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:08:35.503454   54385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:08:35.503560   54385 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:08:35.503568   54385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:08:35.503594   54385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:08:35.503676   54385 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:08:35.503683   54385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:08:35.503701   54385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:08:35.503758   54385 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-326061 san=[127.0.0.1 192.168.50.58 kubernetes-upgrade-326061 localhost minikube]
	I1004 04:08:35.901991   54385 provision.go:177] copyRemoteCerts
	I1004 04:08:35.902049   54385 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:08:35.902073   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHHostname
	I1004 04:08:35.904663   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:35.905063   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:35.905091   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:35.905303   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHPort
	I1004 04:08:35.905528   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:08:35.905692   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHUsername
	I1004 04:08:35.905814   54385 sshutil.go:53] new ssh client: &{IP:192.168.50.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/kubernetes-upgrade-326061/id_rsa Username:docker}
	I1004 04:08:35.986810   54385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:08:36.012844   54385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1004 04:08:36.039457   54385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 04:08:36.066508   54385 provision.go:87] duration metric: took 568.832742ms to configureAuth
	I1004 04:08:36.066543   54385 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:08:36.066734   54385 config.go:182] Loaded profile config "kubernetes-upgrade-326061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1004 04:08:36.066822   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHHostname
	I1004 04:08:36.069345   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:36.069769   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:36.069799   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:36.069965   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHPort
	I1004 04:08:36.070136   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:08:36.070279   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:08:36.070394   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHUsername
	I1004 04:08:36.070514   54385 main.go:141] libmachine: Using SSH client type: native
	I1004 04:08:36.070682   54385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.58 22 <nil> <nil>}
	I1004 04:08:36.070701   54385 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:08:36.289257   54385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:08:36.289287   54385 main.go:141] libmachine: Checking connection to Docker...
	I1004 04:08:36.289299   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetURL
	I1004 04:08:36.290526   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Using libvirt version 6000000
	I1004 04:08:36.292534   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:36.292864   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:36.292889   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:36.293011   54385 main.go:141] libmachine: Docker is up and running!
	I1004 04:08:36.293025   54385 main.go:141] libmachine: Reticulating splines...
	I1004 04:08:36.293033   54385 client.go:171] duration metric: took 24.350208703s to LocalClient.Create
	I1004 04:08:36.293060   54385 start.go:167] duration metric: took 24.350282606s to libmachine.API.Create "kubernetes-upgrade-326061"
	I1004 04:08:36.293073   54385 start.go:293] postStartSetup for "kubernetes-upgrade-326061" (driver="kvm2")
	I1004 04:08:36.293088   54385 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:08:36.293112   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .DriverName
	I1004 04:08:36.293365   54385 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:08:36.293390   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHHostname
	I1004 04:08:36.295226   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:36.295530   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:36.295568   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:36.295675   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHPort
	I1004 04:08:36.295867   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:08:36.296024   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHUsername
	I1004 04:08:36.296135   54385 sshutil.go:53] new ssh client: &{IP:192.168.50.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/kubernetes-upgrade-326061/id_rsa Username:docker}
	I1004 04:08:36.374705   54385 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:08:36.379411   54385 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:08:36.379435   54385 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:08:36.379497   54385 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:08:36.379572   54385 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:08:36.379668   54385 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:08:36.390032   54385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:08:36.415412   54385 start.go:296] duration metric: took 122.324091ms for postStartSetup
	I1004 04:08:36.415461   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetConfigRaw
	I1004 04:08:36.416075   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetIP
	I1004 04:08:36.418908   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:36.419333   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:36.419366   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:36.419618   54385 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/config.json ...
	I1004 04:08:36.419849   54385 start.go:128] duration metric: took 24.49887346s to createHost
	I1004 04:08:36.419873   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHHostname
	I1004 04:08:36.422040   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:36.422453   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:36.422484   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:36.422620   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHPort
	I1004 04:08:36.422788   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:08:36.422934   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:08:36.423058   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHUsername
	I1004 04:08:36.423190   54385 main.go:141] libmachine: Using SSH client type: native
	I1004 04:08:36.423367   54385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.58 22 <nil> <nil>}
	I1004 04:08:36.423379   54385 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:08:36.525530   54385 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728014916.495852006
	
	I1004 04:08:36.525551   54385 fix.go:216] guest clock: 1728014916.495852006
	I1004 04:08:36.525559   54385 fix.go:229] Guest: 2024-10-04 04:08:36.495852006 +0000 UTC Remote: 2024-10-04 04:08:36.419861545 +0000 UTC m=+57.183677249 (delta=75.990461ms)
	I1004 04:08:36.525578   54385 fix.go:200] guest clock delta is within tolerance: 75.990461ms
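	The delta reported above is just the guest timestamp minus the host timestamp: 1728014916.495852006 - 1728014916.419861545 = 0.075990461 s, i.e. the 75.990461ms shown, well inside the drift tolerance. A one-line re-check of that arithmetic (a sketch, assuming bc is available on the host):
	    echo '1728014916.495852006 - 1728014916.419861545' | bc
	    # prints .075990461 (seconds), i.e. ~75.99ms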
	I1004 04:08:36.525583   54385 start.go:83] releasing machines lock for "kubernetes-upgrade-326061", held for 24.604749897s
	I1004 04:08:36.525606   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .DriverName
	I1004 04:08:36.525887   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetIP
	I1004 04:08:36.528765   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:36.529118   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:36.529149   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:36.529341   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .DriverName
	I1004 04:08:36.529847   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .DriverName
	I1004 04:08:36.530022   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .DriverName
	I1004 04:08:36.530102   54385 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:08:36.530144   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHHostname
	I1004 04:08:36.530244   54385 ssh_runner.go:195] Run: cat /version.json
	I1004 04:08:36.530282   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHHostname
	I1004 04:08:36.533063   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:36.533442   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:36.533472   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:36.533494   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:36.533653   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHPort
	I1004 04:08:36.533886   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:08:36.534052   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHUsername
	I1004 04:08:36.534059   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:36.534088   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:36.534199   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHPort
	I1004 04:08:36.534217   54385 sshutil.go:53] new ssh client: &{IP:192.168.50.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/kubernetes-upgrade-326061/id_rsa Username:docker}
	I1004 04:08:36.534367   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:08:36.534512   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHUsername
	I1004 04:08:36.534656   54385 sshutil.go:53] new ssh client: &{IP:192.168.50.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/kubernetes-upgrade-326061/id_rsa Username:docker}
	I1004 04:08:36.609689   54385 ssh_runner.go:195] Run: systemctl --version
	I1004 04:08:36.634648   54385 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:08:36.805375   54385 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:08:36.813191   54385 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:08:36.813290   54385 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:08:36.831145   54385 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:08:36.831196   54385 start.go:495] detecting cgroup driver to use...
	I1004 04:08:36.831278   54385 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:08:36.852810   54385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:08:36.869956   54385 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:08:36.870025   54385 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:08:36.887946   54385 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:08:36.904042   54385 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:08:37.036793   54385 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:08:37.218028   54385 docker.go:233] disabling docker service ...
	I1004 04:08:37.218113   54385 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:08:37.240395   54385 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:08:37.254764   54385 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:08:37.378944   54385 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:08:37.498208   54385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:08:37.515525   54385 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:08:37.536524   54385 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1004 04:08:37.536593   54385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:08:37.548779   54385 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:08:37.548851   54385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:08:37.562040   54385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:08:37.575277   54385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
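	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.2 as the pause image, cgroupfs as the cgroup manager, and "pod" as the conmon cgroup; together with the CRIO_MINIKUBE_OPTIONS drop-in written earlier they make up minikube's CRI-O tuning for this profile. A sketch of how the result could be inspected on the node (assumes the profile name from this log and that minikube ssh is usable against it):
	    minikube -p kubernetes-upgrade-326061 ssh -- sudo cat /etc/crio/crio.conf.d/02-crio.conf
	    minikube -p kubernetes-upgrade-326061 ssh -- cat /etc/sysconfig/crio.minikube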
	I1004 04:08:37.588848   54385 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:08:37.602193   54385 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:08:37.612790   54385 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:08:37.612851   54385 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:08:37.627886   54385 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:08:37.639966   54385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:08:37.761677   54385 ssh_runner.go:195] Run: sudo systemctl restart crio
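	The sysctl probe above fails only because br_netfilter is not loaded yet, so /proc/sys/net/bridge/ does not exist; after the modprobe and the ip_forward write, CRI-O is restarted with the new configuration. A rough sketch of the equivalent manual steps on the node:
	    sudo modprobe br_netfilter
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sudo sysctl net.bridge.bridge-nf-call-iptables    # resolves once the module is loaded
	    sudo systemctl daemon-reload && sudo systemctl restart crio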
	I1004 04:08:37.873334   54385 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:08:37.873455   54385 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:08:37.879200   54385 start.go:563] Will wait 60s for crictl version
	I1004 04:08:37.879257   54385 ssh_runner.go:195] Run: which crictl
	I1004 04:08:37.883525   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:08:37.932825   54385 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:08:37.932911   54385 ssh_runner.go:195] Run: crio --version
	I1004 04:08:37.963930   54385 ssh_runner.go:195] Run: crio --version
	I1004 04:08:38.002067   54385 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1004 04:08:38.003400   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetIP
	I1004 04:08:38.006620   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:38.007094   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:08:27 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:08:38.007126   54385 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:08:38.007331   54385 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1004 04:08:38.011890   54385 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:08:38.026069   54385 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-326061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-326061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:08:38.026198   54385 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1004 04:08:38.026265   54385 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:08:38.067919   54385 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:08:38.067980   54385 ssh_runner.go:195] Run: which lz4
	I1004 04:08:38.072604   54385 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:08:38.077616   54385 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:08:38.077649   54385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1004 04:08:39.926253   54385 crio.go:462] duration metric: took 1.853691017s to copy over tarball
	I1004 04:08:39.926351   54385 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:08:42.695203   54385 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.768816598s)
	I1004 04:08:42.695234   54385 crio.go:469] duration metric: took 2.768938542s to extract the tarball
	I1004 04:08:42.695244   54385 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:08:42.739456   54385 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:08:42.786483   54385 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:08:42.786508   54385 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 04:08:42.786557   54385 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:08:42.786574   54385 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:08:42.786590   54385 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:08:42.786609   54385 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:08:42.786632   54385 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:08:42.786636   54385 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:08:42.786685   54385 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1004 04:08:42.786855   54385 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1004 04:08:42.788089   54385 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1004 04:08:42.788129   54385 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:08:42.788170   54385 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:08:42.788298   54385 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:08:42.788095   54385 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1004 04:08:42.788370   54385 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:08:42.788452   54385 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:08:42.788713   54385 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:08:42.987065   54385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1004 04:08:43.047816   54385 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1004 04:08:43.047859   54385 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1004 04:08:43.047903   54385 ssh_runner.go:195] Run: which crictl
	I1004 04:08:43.053195   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:08:43.059061   54385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1004 04:08:43.098540   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:08:43.120534   54385 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1004 04:08:43.120578   54385 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1004 04:08:43.120623   54385 ssh_runner.go:195] Run: which crictl
	I1004 04:08:43.143646   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:08:43.143762   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:08:43.144497   54385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:08:43.154496   54385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:08:43.157302   54385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:08:43.159369   54385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1004 04:08:43.173532   54385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:08:43.293782   54385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1004 04:08:43.293840   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:08:43.328180   54385 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1004 04:08:43.328237   54385 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:08:43.328286   54385 ssh_runner.go:195] Run: which crictl
	I1004 04:08:43.343700   54385 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1004 04:08:43.343748   54385 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:08:43.343809   54385 ssh_runner.go:195] Run: which crictl
	I1004 04:08:43.343812   54385 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1004 04:08:43.343844   54385 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:08:43.343845   54385 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1004 04:08:43.343886   54385 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1004 04:08:43.343900   54385 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:08:43.343910   54385 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:08:43.343928   54385 ssh_runner.go:195] Run: which crictl
	I1004 04:08:43.343941   54385 ssh_runner.go:195] Run: which crictl
	I1004 04:08:43.343889   54385 ssh_runner.go:195] Run: which crictl
	I1004 04:08:43.365876   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:08:43.365933   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:08:43.365956   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:08:43.365956   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:08:43.366015   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:08:43.366019   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:08:43.464798   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:08:43.504495   54385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1004 04:08:43.504599   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:08:43.504614   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:08:43.504620   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:08:43.504657   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:08:43.558266   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:08:43.646349   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:08:43.646402   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:08:43.646402   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:08:43.646408   54385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:08:43.658794   54385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1004 04:08:43.724865   54385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1004 04:08:43.751893   54385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1004 04:08:43.751903   54385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1004 04:08:43.757311   54385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1004 04:08:44.044314   54385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:08:44.187298   54385 cache_images.go:92] duration metric: took 1.400767499s to LoadCachedImages
	W1004 04:08:44.187370   54385 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
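	The warning above only means that no locally cached image tarballs were found under .minikube/cache/images; it is logged as a warning and the start simply continues without pre-loaded images. If a pre-seeded cache were wanted, one way it is normally populated is via the cache subcommand (an assumption about the usual minikube workflow, not something this test does):
	    minikube cache add registry.k8s.io/pause:3.2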
	I1004 04:08:44.187384   54385 kubeadm.go:934] updating node { 192.168.50.58 8443 v1.20.0 crio true true} ...
	I1004 04:08:44.187506   54385 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-326061 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-326061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
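	The unit fragment above is the kubelet drop-in minikube generates for v1.20.0: ExecStart is cleared and re-pointed at /var/lib/minikube/binaries/v1.20.0/kubelet with the CRI-O socket and node IP baked in, and a few lines further down it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch for inspecting it on the running node (profile name taken from this log, assuming minikube ssh is usable):
	    minikube -p kubernetes-upgrade-326061 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    minikube -p kubernetes-upgrade-326061 ssh -- systemctl cat kubelet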
	I1004 04:08:44.187588   54385 ssh_runner.go:195] Run: crio config
	I1004 04:08:44.238356   54385 cni.go:84] Creating CNI manager for ""
	I1004 04:08:44.238383   54385 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:08:44.238392   54385 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:08:44.238416   54385 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.58 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-326061 NodeName:kubernetes-upgrade-326061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1004 04:08:44.238703   54385 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-326061"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:08:44.238798   54385 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1004 04:08:44.249499   54385 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:08:44.249574   54385 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:08:44.259760   54385 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1004 04:08:44.277528   54385 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:08:44.294780   54385 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
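	At this point the rendered kubeadm config (the 2123-byte YAML shown above) sits at /var/tmp/minikube/kubeadm.yaml.new on the node; it is copied to /var/tmp/minikube/kubeadm.yaml a little later, just before init. One way it could be sanity-checked by hand, as a sketch only and assuming the v1.20.0 binaries that the next step finds under /var/lib/minikube/binaries:
	    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run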
	I1004 04:08:44.312180   54385 ssh_runner.go:195] Run: grep 192.168.50.58	control-plane.minikube.internal$ /etc/hosts
	I1004 04:08:44.316317   54385 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:08:44.329612   54385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:08:44.443629   54385 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:08:44.464676   54385 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061 for IP: 192.168.50.58
	I1004 04:08:44.464699   54385 certs.go:194] generating shared ca certs ...
	I1004 04:08:44.464720   54385 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:08:44.464886   54385 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:08:44.464949   54385 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:08:44.464961   54385 certs.go:256] generating profile certs ...
	I1004 04:08:44.465030   54385 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/client.key
	I1004 04:08:44.465054   54385 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/client.crt with IP's: []
	I1004 04:08:44.787223   54385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/client.crt ...
	I1004 04:08:44.787259   54385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/client.crt: {Name:mk759b02775a99671fc098f0d272d6ed972b6680 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:08:44.787482   54385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/client.key ...
	I1004 04:08:44.787505   54385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/client.key: {Name:mk3e5d0fafba6cb9cd0cb488bb338eba262c374c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:08:44.787608   54385 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/apiserver.key.9c3d728a
	I1004 04:08:44.787626   54385 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/apiserver.crt.9c3d728a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.58]
	I1004 04:08:44.892271   54385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/apiserver.crt.9c3d728a ...
	I1004 04:08:44.892305   54385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/apiserver.crt.9c3d728a: {Name:mk4c52651d869c393a6fa4fc0b26d5d1556f3da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:08:44.892464   54385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/apiserver.key.9c3d728a ...
	I1004 04:08:44.892477   54385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/apiserver.key.9c3d728a: {Name:mk0bc39a45483523a51a75cdded795aa8c62680f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:08:44.892547   54385 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/apiserver.crt.9c3d728a -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/apiserver.crt
	I1004 04:08:44.892616   54385 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/apiserver.key.9c3d728a -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/apiserver.key
	I1004 04:08:44.892668   54385 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/proxy-client.key
	I1004 04:08:44.892682   54385 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/proxy-client.crt with IP's: []
	I1004 04:08:45.056860   54385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/proxy-client.crt ...
	I1004 04:08:45.056895   54385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/proxy-client.crt: {Name:mk2c77e406779f889b482c803a6971b10ce015a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:08:45.057072   54385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/proxy-client.key ...
	I1004 04:08:45.057085   54385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/proxy-client.key: {Name:mka096bdba34fc235cd5d62066e552f8029906ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:08:45.057237   54385 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:08:45.057275   54385 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:08:45.057283   54385 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:08:45.057308   54385 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:08:45.057327   54385 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:08:45.057347   54385 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:08:45.057384   54385 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:08:45.057917   54385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:08:45.092512   54385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:08:45.125131   54385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:08:45.156718   54385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:08:45.186211   54385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1004 04:08:45.215181   54385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 04:08:45.245320   54385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:08:45.275280   54385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 04:08:45.302597   54385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:08:45.331044   54385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:08:45.359820   54385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:08:45.390916   54385 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:08:45.420710   54385 ssh_runner.go:195] Run: openssl version
	I1004 04:08:45.428861   54385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:08:45.440893   54385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:08:45.445887   54385 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:08:45.445944   54385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:08:45.452618   54385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:08:45.464260   54385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:08:45.476812   54385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:08:45.482017   54385 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:08:45.482073   54385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:08:45.488191   54385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:08:45.499925   54385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:08:45.511938   54385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:08:45.517303   54385 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:08:45.517372   54385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:08:45.523831   54385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
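	The three symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject-hash values of 16879.pem, 168792.pem and minikubeCA.pem respectively, which is how OpenSSL locates CA certificates in /etc/ssl/certs. The hash can be reproduced directly for any of them, e.g.:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0 /etc/ssl/certs/minikubeCA.pem             # the two-link chain created above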
	I1004 04:08:45.535764   54385 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:08:45.540558   54385 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 04:08:45.540617   54385 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-326061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-326061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:08:45.540702   54385 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:08:45.540760   54385 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:08:45.582711   54385 cri.go:89] found id: ""
	I1004 04:08:45.582788   54385 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:08:45.593102   54385 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:08:45.603543   54385 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:08:45.614434   54385 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:08:45.614462   54385 kubeadm.go:157] found existing configuration files:
	
	I1004 04:08:45.614519   54385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:08:45.625041   54385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:08:45.625117   54385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:08:45.635413   54385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:08:45.645879   54385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:08:45.645947   54385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:08:45.657794   54385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:08:45.669291   54385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:08:45.669377   54385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:08:45.679730   54385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:08:45.690988   54385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:08:45.691052   54385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:08:45.701569   54385 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:08:45.835206   54385 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:08:45.835562   54385 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:08:46.052967   54385 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:08:46.053088   54385 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:08:46.053220   54385 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 04:08:46.318122   54385 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:08:46.407750   54385 out.go:235]   - Generating certificates and keys ...
	I1004 04:08:46.407920   54385 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:08:46.408024   54385 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:08:46.445623   54385 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 04:08:46.653673   54385 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1004 04:08:46.761140   54385 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1004 04:08:46.899382   54385 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1004 04:08:47.107766   54385 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1004 04:08:47.108087   54385 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-326061 localhost] and IPs [192.168.50.58 127.0.0.1 ::1]
	I1004 04:08:47.341248   54385 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1004 04:08:47.341511   54385 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-326061 localhost] and IPs [192.168.50.58 127.0.0.1 ::1]
	I1004 04:08:47.516781   54385 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 04:08:47.580695   54385 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 04:08:47.853327   54385 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1004 04:08:47.853450   54385 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:08:47.927710   54385 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:08:48.112359   54385 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:08:48.203940   54385 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:08:48.331629   54385 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:08:48.360427   54385 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:08:48.361692   54385 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:08:48.361772   54385 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:08:48.496537   54385 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:08:48.498721   54385 out.go:235]   - Booting up control plane ...
	I1004 04:08:48.498849   54385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:08:48.506657   54385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:08:48.507615   54385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:08:48.508528   54385 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:08:48.513257   54385 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 04:09:28.505130   54385 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:09:28.505722   54385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:09:28.506011   54385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:09:33.506062   54385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:09:33.506310   54385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:09:43.505422   54385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:09:43.505640   54385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:10:03.505090   54385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:10:03.505378   54385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:10:43.506679   54385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:10:43.506929   54385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:10:43.506963   54385 kubeadm.go:310] 
	I1004 04:10:43.507035   54385 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:10:43.507140   54385 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:10:43.507157   54385 kubeadm.go:310] 
	I1004 04:10:43.507196   54385 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:10:43.507239   54385 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:10:43.507359   54385 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:10:43.507369   54385 kubeadm.go:310] 
	I1004 04:10:43.507495   54385 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:10:43.507539   54385 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:10:43.507596   54385 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:10:43.507606   54385 kubeadm.go:310] 
	I1004 04:10:43.507729   54385 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:10:43.507839   54385 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:10:43.507851   54385 kubeadm.go:310] 
	I1004 04:10:43.507967   54385 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:10:43.508068   54385 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:10:43.508159   54385 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:10:43.508248   54385 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:10:43.508259   54385 kubeadm.go:310] 
	I1004 04:10:43.509544   54385 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:10:43.509685   54385 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:10:43.509796   54385 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1004 04:10:43.509955   54385 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-326061 localhost] and IPs [192.168.50.58 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-326061 localhost] and IPs [192.168.50.58 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-326061 localhost] and IPs [192.168.50.58 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-326061 localhost] and IPs [192.168.50.58 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1004 04:10:43.509998   54385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:10:44.012674   54385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:10:44.027745   54385 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:10:44.040177   54385 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:10:44.040203   54385 kubeadm.go:157] found existing configuration files:
	
	I1004 04:10:44.040257   54385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:10:44.051760   54385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:10:44.051851   54385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:10:44.062435   54385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:10:44.073295   54385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:10:44.073365   54385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:10:44.084277   54385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:10:44.094510   54385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:10:44.094582   54385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:10:44.105475   54385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:10:44.115372   54385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:10:44.115429   54385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:10:44.126450   54385 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:10:44.207411   54385 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:10:44.207475   54385 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:10:44.360290   54385 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:10:44.360472   54385 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:10:44.360614   54385 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 04:10:44.574741   54385 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:10:44.576673   54385 out.go:235]   - Generating certificates and keys ...
	I1004 04:10:44.576781   54385 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:10:44.576866   54385 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:10:44.576972   54385 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:10:44.577055   54385 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:10:44.577145   54385 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:10:44.577220   54385 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:10:44.577297   54385 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:10:44.577373   54385 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:10:44.577448   54385 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:10:44.578139   54385 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:10:44.578535   54385 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:10:44.578643   54385 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:10:45.356228   54385 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:10:45.413238   54385 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:10:45.648097   54385 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:10:45.935523   54385 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:10:45.956808   54385 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:10:45.958509   54385 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:10:45.958586   54385 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:10:46.158596   54385 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:10:46.160894   54385 out.go:235]   - Booting up control plane ...
	I1004 04:10:46.161020   54385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:10:46.172695   54385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:10:46.174023   54385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:10:46.174953   54385 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:10:46.177511   54385 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 04:11:26.179915   54385 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:11:26.180441   54385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:11:26.180609   54385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:11:31.181353   54385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:11:31.181563   54385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:11:41.182024   54385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:11:41.182249   54385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:12:01.181603   54385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:12:01.181837   54385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:12:41.181859   54385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:12:41.182084   54385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:12:41.182100   54385 kubeadm.go:310] 
	I1004 04:12:41.182165   54385 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:12:41.182224   54385 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:12:41.182234   54385 kubeadm.go:310] 
	I1004 04:12:41.182302   54385 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:12:41.182366   54385 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:12:41.182510   54385 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:12:41.182532   54385 kubeadm.go:310] 
	I1004 04:12:41.182706   54385 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:12:41.182764   54385 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:12:41.182813   54385 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:12:41.182832   54385 kubeadm.go:310] 
	I1004 04:12:41.182986   54385 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:12:41.183085   54385 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:12:41.183100   54385 kubeadm.go:310] 
	I1004 04:12:41.183255   54385 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:12:41.183373   54385 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:12:41.183471   54385 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:12:41.183577   54385 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:12:41.183591   54385 kubeadm.go:310] 
	I1004 04:12:41.184464   54385 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:12:41.184596   54385 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:12:41.184663   54385 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1004 04:12:41.184729   54385 kubeadm.go:394] duration metric: took 3m55.644118173s to StartCluster
	I1004 04:12:41.184763   54385 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:12:41.184827   54385 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:12:41.233387   54385 cri.go:89] found id: ""
	I1004 04:12:41.233514   54385 logs.go:282] 0 containers: []
	W1004 04:12:41.233541   54385 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:12:41.233556   54385 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:12:41.233653   54385 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:12:41.280756   54385 cri.go:89] found id: ""
	I1004 04:12:41.280790   54385 logs.go:282] 0 containers: []
	W1004 04:12:41.280803   54385 logs.go:284] No container was found matching "etcd"
	I1004 04:12:41.280811   54385 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:12:41.280893   54385 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:12:41.321027   54385 cri.go:89] found id: ""
	I1004 04:12:41.321061   54385 logs.go:282] 0 containers: []
	W1004 04:12:41.321073   54385 logs.go:284] No container was found matching "coredns"
	I1004 04:12:41.321081   54385 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:12:41.321143   54385 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:12:41.357691   54385 cri.go:89] found id: ""
	I1004 04:12:41.357719   54385 logs.go:282] 0 containers: []
	W1004 04:12:41.357729   54385 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:12:41.357735   54385 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:12:41.357784   54385 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:12:41.398707   54385 cri.go:89] found id: ""
	I1004 04:12:41.398734   54385 logs.go:282] 0 containers: []
	W1004 04:12:41.398742   54385 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:12:41.398748   54385 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:12:41.398796   54385 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:12:41.434571   54385 cri.go:89] found id: ""
	I1004 04:12:41.434597   54385 logs.go:282] 0 containers: []
	W1004 04:12:41.434605   54385 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:12:41.434611   54385 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:12:41.434659   54385 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:12:41.471638   54385 cri.go:89] found id: ""
	I1004 04:12:41.471668   54385 logs.go:282] 0 containers: []
	W1004 04:12:41.471686   54385 logs.go:284] No container was found matching "kindnet"
	I1004 04:12:41.471697   54385 logs.go:123] Gathering logs for kubelet ...
	I1004 04:12:41.471713   54385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:12:41.524226   54385 logs.go:123] Gathering logs for dmesg ...
	I1004 04:12:41.524270   54385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:12:41.539066   54385 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:12:41.539098   54385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:12:41.674101   54385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:12:41.674124   54385 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:12:41.674140   54385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:12:41.777014   54385 logs.go:123] Gathering logs for container status ...
	I1004 04:12:41.777047   54385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1004 04:12:41.818059   54385 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1004 04:12:41.818122   54385 out.go:270] * 
	* 
	W1004 04:12:41.818198   54385 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:12:41.818217   54385 out.go:270] * 
	* 
	W1004 04:12:41.819058   54385 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 04:12:41.822001   54385 out.go:201] 
	W1004 04:12:41.823239   54385 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:12:41.823301   54385 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1004 04:12:41.823329   54385 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1004 04:12:41.824771   54385 out.go:201] 

                                                
                                                
** /stderr **
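For reference, the troubleshooting sequence that the kubeadm output above suggests can be run on the node for this profile. This is a minimal sketch, assuming the profile name kubernetes-upgrade-326061 and the cri-o socket path shown in the log; the final retry flag comes from the suggestion printed further up:

	# open a shell on the profile's VM; the commands below are run inside that shell
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-326061
	# check whether the kubelet is running and why it may have exited
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# probe the health endpoint that kubeadm polls (the log shows it refusing connections)
	curl -sSL http://localhost:10248/healthz
	# list control-plane containers and inspect a failing one (CONTAINERID is a placeholder)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# if the journal points at a cgroup-driver mismatch, retry with the suggested flag (back on the host)
	out/minikube-linux-amd64 start -p kubernetes-upgrade-326061 --extra-config=kubelet.cgroup-driver=systemd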
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-326061 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-326061
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-326061: (2.288846429s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-326061 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-326061 status --format={{.Host}}: exit status 7 (64.2165ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
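Only the Host field is checked here; a broader view of the stopped profile can be had with a Go template over additional status fields. This is a sketch assuming a Kubelet field alongside the Host and APIServer fields used elsewhere in this report (the --output flag is likewise an assumption, not taken from this log):

	out/minikube-linux-amd64 -p kubernetes-upgrade-326061 status --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	out/minikube-linux-amd64 -p kubernetes-upgrade-326061 status --output=json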
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-326061 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-326061 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.47064487s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-326061 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-326061 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-326061 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (91.070559ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-326061] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-326061
	    minikube start -p kubernetes-upgrade-326061 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3260612 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-326061 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
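The exit status 106 is what the test expects here ("should fail"); if an actual downgrade were wanted, the first suggestion above would apply. A minimal sketch of that path, carrying over the driver and runtime flags used by the other start invocations in this report (an assumption, since the suggestion itself omits them):

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-326061
	out/minikube-linux-amd64 start -p kubernetes-upgrade-326061 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio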
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-326061 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-326061 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (18.209046437s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-04 04:14:06.068843168 +0000 UTC m=+5165.001783723
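Per the boxed advice earlier in the output, a log bundle for the GitHub issue can be captured from this profile; a minimal sketch (the file name is arbitrary):

	out/minikube-linux-amd64 -p kubernetes-upgrade-326061 logs --file=logs.txt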
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-326061 -n kubernetes-upgrade-326061
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-326061 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-326061 logs -n 25: (2.688216568s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p force-systemd-flag-519066          | force-systemd-flag-519066 | jenkins | v1.34.0 | 04 Oct 24 04:09 UTC | 04 Oct 24 04:09 UTC |
	| start   | -p pause-353264 --memory=2048         | pause-353264              | jenkins | v1.34.0 | 04 Oct 24 04:09 UTC | 04 Oct 24 04:11 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-756541 ssh               | cert-options-756541       | jenkins | v1.34.0 | 04 Oct 24 04:10 UTC | 04 Oct 24 04:10 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-756541 -- sudo        | cert-options-756541       | jenkins | v1.34.0 | 04 Oct 24 04:10 UTC | 04 Oct 24 04:10 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-756541                | cert-options-756541       | jenkins | v1.34.0 | 04 Oct 24 04:10 UTC | 04 Oct 24 04:10 UTC |
	| start   | -p stopped-upgrade-389737             | minikube                  | jenkins | v1.26.0 | 04 Oct 24 04:10 UTC | 04 Oct 24 04:11 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p pause-353264                       | pause-353264              | jenkins | v1.34.0 | 04 Oct 24 04:11 UTC | 04 Oct 24 04:12 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-389737 stop           | minikube                  | jenkins | v1.26.0 | 04 Oct 24 04:11 UTC | 04 Oct 24 04:11 UTC |
	| start   | -p stopped-upgrade-389737             | stopped-upgrade-389737    | jenkins | v1.34.0 | 04 Oct 24 04:11 UTC | 04 Oct 24 04:12 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p pause-353264                       | pause-353264              | jenkins | v1.34.0 | 04 Oct 24 04:12 UTC | 04 Oct 24 04:12 UTC |
	| start   | -p NoKubernetes-316059                | NoKubernetes-316059       | jenkins | v1.34.0 | 04 Oct 24 04:12 UTC |                     |
	|         | --no-kubernetes                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20             |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-316059                | NoKubernetes-316059       | jenkins | v1.34.0 | 04 Oct 24 04:12 UTC | 04 Oct 24 04:13 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-389737             | stopped-upgrade-389737    | jenkins | v1.34.0 | 04 Oct 24 04:12 UTC | 04 Oct 24 04:12 UTC |
	| stop    | -p kubernetes-upgrade-326061          | kubernetes-upgrade-326061 | jenkins | v1.34.0 | 04 Oct 24 04:12 UTC | 04 Oct 24 04:12 UTC |
	| start   | -p running-upgrade-552490             | minikube                  | jenkins | v1.26.0 | 04 Oct 24 04:12 UTC | 04 Oct 24 04:13 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-326061          | kubernetes-upgrade-326061 | jenkins | v1.34.0 | 04 Oct 24 04:12 UTC | 04 Oct 24 04:13 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-316059                | NoKubernetes-316059       | jenkins | v1.34.0 | 04 Oct 24 04:13 UTC | 04 Oct 24 04:13 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-363290             | cert-expiration-363290    | jenkins | v1.34.0 | 04 Oct 24 04:13 UTC | 04 Oct 24 04:14 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-552490             | running-upgrade-552490    | jenkins | v1.34.0 | 04 Oct 24 04:13 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-326061          | kubernetes-upgrade-326061 | jenkins | v1.34.0 | 04 Oct 24 04:13 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-326061          | kubernetes-upgrade-326061 | jenkins | v1.34.0 | 04 Oct 24 04:13 UTC | 04 Oct 24 04:14 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-316059                | NoKubernetes-316059       | jenkins | v1.34.0 | 04 Oct 24 04:13 UTC | 04 Oct 24 04:13 UTC |
	| start   | -p NoKubernetes-316059                | NoKubernetes-316059       | jenkins | v1.34.0 | 04 Oct 24 04:13 UTC |                     |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-363290             | cert-expiration-363290    | jenkins | v1.34.0 | 04 Oct 24 04:14 UTC | 04 Oct 24 04:14 UTC |
	| start   | -p old-k8s-version-420062             | old-k8s-version-420062    | jenkins | v1.34.0 | 04 Oct 24 04:14 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 04:14:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 04:14:04.109580   61939 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:14:04.109905   61939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:14:04.109918   61939 out.go:358] Setting ErrFile to fd 2...
	I1004 04:14:04.109925   61939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:14:04.110203   61939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:14:04.110967   61939 out.go:352] Setting JSON to false
	I1004 04:14:04.112279   61939 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6989,"bootTime":1728008255,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 04:14:04.112361   61939 start.go:139] virtualization: kvm guest
	I1004 04:14:04.131897   61939 out.go:177] * [old-k8s-version-420062] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 04:14:04.133486   61939 notify.go:220] Checking for updates...
	I1004 04:14:04.133538   61939 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 04:14:04.210406   61939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 04:14:04.256083   61939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:14:04.278691   61939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:14:04.353154   61939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 04:14:04.354619   61939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 04:14:04.356639   61939 config.go:182] Loaded profile config "NoKubernetes-316059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1004 04:14:04.356796   61939 config.go:182] Loaded profile config "kubernetes-upgrade-326061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:14:04.356929   61939 config.go:182] Loaded profile config "running-upgrade-552490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1004 04:14:04.357041   61939 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 04:14:04.509043   61939 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 04:14:04.511095   61939 start.go:297] selected driver: kvm2
	I1004 04:14:04.511116   61939 start.go:901] validating driver "kvm2" against <nil>
	I1004 04:14:04.511133   61939 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 04:14:04.512270   61939 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:14:04.512397   61939 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 04:14:04.534810   61939 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 04:14:04.534889   61939 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 04:14:04.535223   61939 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:14:04.535263   61939 cni.go:84] Creating CNI manager for ""
	I1004 04:14:04.535320   61939 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:14:04.535337   61939 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 04:14:04.535399   61939 start.go:340] cluster config:
	{Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:14:04.535535   61939 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:14:03.115868   61530 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:14:03.128528   61530 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:14:03.150908   61530 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:14:03.151017   61530 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1004 04:14:03.151043   61530 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1004 04:14:03.160365   61530 system_pods.go:59] 5 kube-system pods found
	I1004 04:14:03.160410   61530 system_pods.go:61] "etcd-kubernetes-upgrade-326061" [6df0414b-8fe2-403e-8653-17896bcea225] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:14:03.160422   61530 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-326061" [ab973681-b6ad-4b33-b8c2-93b02965b9e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:14:03.160433   61530 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-326061" [83ab45ca-4cbd-4364-a7cb-7efd0f381b90] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:14:03.160445   61530 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-326061" [b5044107-fffd-4b9d-b364-b40b825dc7c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:14:03.160456   61530 system_pods.go:61] "storage-provisioner" [3af2efab-2e7c-427a-8432-c291c3c6a220] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1004 04:14:03.160466   61530 system_pods.go:74] duration metric: took 9.528899ms to wait for pod list to return data ...
	I1004 04:14:03.160475   61530 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:14:03.164303   61530 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:14:03.164341   61530 node_conditions.go:123] node cpu capacity is 2
	I1004 04:14:03.164353   61530 node_conditions.go:105] duration metric: took 3.868044ms to run NodePressure ...
	I1004 04:14:03.164375   61530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:14:04.520211   61530 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.355812817s)
	I1004 04:14:04.520247   61530 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:14:04.536189   61530 ops.go:34] apiserver oom_adj: -16
	I1004 04:14:04.536212   61530 kubeadm.go:597] duration metric: took 9.159354763s to restartPrimaryControlPlane
	I1004 04:14:04.536222   61530 kubeadm.go:394] duration metric: took 9.266690643s to StartCluster
	I1004 04:14:04.536238   61530 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:14:04.536297   61530 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:14:04.537127   61530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:14:04.537375   61530 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:14:04.589995   61939 out.go:177] * Starting "old-k8s-version-420062" primary control-plane node in "old-k8s-version-420062" cluster
	I1004 04:14:04.537478   61530 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:14:04.537590   61530 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-326061"
	I1004 04:14:04.537608   61530 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-326061"
	W1004 04:14:04.537616   61530 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:14:04.537630   61530 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-326061"
	I1004 04:14:04.537646   61530 host.go:66] Checking if "kubernetes-upgrade-326061" exists ...
	I1004 04:14:04.537650   61530 config.go:182] Loaded profile config "kubernetes-upgrade-326061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:14:04.537656   61530 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-326061"
	I1004 04:14:04.538068   61530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:14:04.538091   61530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:14:04.538112   61530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:14:04.538122   61530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:14:04.558698   61530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37865
	I1004 04:14:04.558964   61530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40823
	I1004 04:14:04.559200   61530 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:14:04.559895   61530 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:14:04.559971   61530 main.go:141] libmachine: Using API Version  1
	I1004 04:14:04.559994   61530 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:14:04.560362   61530 main.go:141] libmachine: Using API Version  1
	I1004 04:14:04.560380   61530 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:14:04.560470   61530 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:14:04.560765   61530 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:14:04.560945   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetState
	I1004 04:14:04.561040   61530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:14:04.561072   61530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:14:04.565154   61530 kapi.go:59] client config for kubernetes-upgrade-326061: &rest.Config{Host:"https://192.168.50.58:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/client.crt", KeyFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kubernetes-upgrade-326061/client.key", CAFile:"/home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 04:14:04.565493   61530 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-326061"
	W1004 04:14:04.565513   61530 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:14:04.565541   61530 host.go:66] Checking if "kubernetes-upgrade-326061" exists ...
	I1004 04:14:04.565906   61530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:14:04.565955   61530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:14:04.586721   61530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43021
	I1004 04:14:04.587928   61530 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:14:04.589131   61530 main.go:141] libmachine: Using API Version  1
	I1004 04:14:04.589154   61530 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:14:04.589592   61530 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:14:04.589792   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetState
	I1004 04:14:04.590087   61530 out.go:177] * Verifying Kubernetes components...
	I1004 04:14:04.591305   61530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33123
	I1004 04:14:04.591684   61530 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:14:04.592099   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .DriverName
	I1004 04:14:04.592466   61530 main.go:141] libmachine: Using API Version  1
	I1004 04:14:04.592486   61530 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:14:04.593125   61530 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:14:04.593546   61530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:14:04.593577   61530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:14:04.594648   61530 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:14:00.516767   61394 crio.go:462] duration metric: took 2.537868079s to copy over tarball
	I1004 04:14:00.516848   61394 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:14:00.576038   61680 main.go:141] libmachine: (NoKubernetes-316059) DBG | domain NoKubernetes-316059 has defined MAC address 52:54:00:4b:24:a4 in network mk-NoKubernetes-316059
	I1004 04:14:00.576563   61680 main.go:141] libmachine: (NoKubernetes-316059) DBG | unable to find current IP address of domain NoKubernetes-316059 in network mk-NoKubernetes-316059
	I1004 04:14:00.576597   61680 main.go:141] libmachine: (NoKubernetes-316059) DBG | I1004 04:14:00.576535   61718 retry.go:31] will retry after 934.730088ms: waiting for machine to come up
	I1004 04:14:01.513205   61680 main.go:141] libmachine: (NoKubernetes-316059) DBG | domain NoKubernetes-316059 has defined MAC address 52:54:00:4b:24:a4 in network mk-NoKubernetes-316059
	I1004 04:14:01.513836   61680 main.go:141] libmachine: (NoKubernetes-316059) DBG | unable to find current IP address of domain NoKubernetes-316059 in network mk-NoKubernetes-316059
	I1004 04:14:01.513876   61680 main.go:141] libmachine: (NoKubernetes-316059) DBG | I1004 04:14:01.513751   61718 retry.go:31] will retry after 1.147438736s: waiting for machine to come up
	I1004 04:14:02.662536   61680 main.go:141] libmachine: (NoKubernetes-316059) DBG | domain NoKubernetes-316059 has defined MAC address 52:54:00:4b:24:a4 in network mk-NoKubernetes-316059
	I1004 04:14:02.663426   61680 main.go:141] libmachine: (NoKubernetes-316059) DBG | unable to find current IP address of domain NoKubernetes-316059 in network mk-NoKubernetes-316059
	I1004 04:14:02.663441   61680 main.go:141] libmachine: (NoKubernetes-316059) DBG | I1004 04:14:02.663346   61718 retry.go:31] will retry after 1.474043605s: waiting for machine to come up
	I1004 04:14:04.139190   61680 main.go:141] libmachine: (NoKubernetes-316059) DBG | domain NoKubernetes-316059 has defined MAC address 52:54:00:4b:24:a4 in network mk-NoKubernetes-316059
	I1004 04:14:04.139646   61680 main.go:141] libmachine: (NoKubernetes-316059) DBG | unable to find current IP address of domain NoKubernetes-316059 in network mk-NoKubernetes-316059
	I1004 04:14:04.139667   61680 main.go:141] libmachine: (NoKubernetes-316059) DBG | I1004 04:14:04.139624   61718 retry.go:31] will retry after 1.807884473s: waiting for machine to come up
	I1004 04:14:04.595897   61530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:14:04.611789   61530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42387
	I1004 04:14:04.612404   61530 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:14:04.612924   61530 main.go:141] libmachine: Using API Version  1
	I1004 04:14:04.612950   61530 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:14:04.613336   61530 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:14:04.613573   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetState
	I1004 04:14:04.615294   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .DriverName
	I1004 04:14:04.615520   61530 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:14:04.615538   61530 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:14:04.615561   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHHostname
	I1004 04:14:04.618912   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:14:04.619616   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:13:21 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:14:04.619641   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:14:04.619843   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHPort
	I1004 04:14:04.620023   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:14:04.620192   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHUsername
	I1004 04:14:04.620465   61530 sshutil.go:53] new ssh client: &{IP:192.168.50.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/kubernetes-upgrade-326061/id_rsa Username:docker}
	I1004 04:14:04.672734   61530 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:14:04.672763   61530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:14:04.672790   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHHostname
	I1004 04:14:04.676923   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:14:04.677533   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:71:c1", ip: ""} in network mk-kubernetes-upgrade-326061: {Iface:virbr2 ExpiryTime:2024-10-04 05:13:21 +0000 UTC Type:0 Mac:52:54:00:ca:71:c1 Iaid: IPaddr:192.168.50.58 Prefix:24 Hostname:kubernetes-upgrade-326061 Clientid:01:52:54:00:ca:71:c1}
	I1004 04:14:04.677563   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | domain kubernetes-upgrade-326061 has defined IP address 192.168.50.58 and MAC address 52:54:00:ca:71:c1 in network mk-kubernetes-upgrade-326061
	I1004 04:14:04.677773   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHPort
	I1004 04:14:04.677953   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHKeyPath
	I1004 04:14:04.678107   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .GetSSHUsername
	I1004 04:14:04.678228   61530 sshutil.go:53] new ssh client: &{IP:192.168.50.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/kubernetes-upgrade-326061/id_rsa Username:docker}
	I1004 04:14:04.816032   61530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:14:04.834318   61530 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:14:04.834425   61530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:14:04.852352   61530 api_server.go:72] duration metric: took 314.93859ms to wait for apiserver process to appear ...
	I1004 04:14:04.852382   61530 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:14:04.852406   61530 api_server.go:253] Checking apiserver healthz at https://192.168.50.58:8443/healthz ...
	I1004 04:14:04.869013   61530 api_server.go:279] https://192.168.50.58:8443/healthz returned 200:
	ok
	I1004 04:14:04.870268   61530 api_server.go:141] control plane version: v1.31.1
	I1004 04:14:04.870294   61530 api_server.go:131] duration metric: took 17.904102ms to wait for apiserver health ...
	I1004 04:14:04.870306   61530 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:14:04.874908   61530 system_pods.go:59] 5 kube-system pods found
	I1004 04:14:04.874954   61530 system_pods.go:61] "etcd-kubernetes-upgrade-326061" [6df0414b-8fe2-403e-8653-17896bcea225] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:14:04.874969   61530 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-326061" [ab973681-b6ad-4b33-b8c2-93b02965b9e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:14:04.874984   61530 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-326061" [83ab45ca-4cbd-4364-a7cb-7efd0f381b90] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:14:04.874999   61530 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-326061" [b5044107-fffd-4b9d-b364-b40b825dc7c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:14:04.875006   61530 system_pods.go:61] "storage-provisioner" [3af2efab-2e7c-427a-8432-c291c3c6a220] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1004 04:14:04.875015   61530 system_pods.go:74] duration metric: took 4.700479ms to wait for pod list to return data ...
	I1004 04:14:04.875031   61530 kubeadm.go:582] duration metric: took 337.62303ms to wait for: map[apiserver:true system_pods:true]
	I1004 04:14:04.875048   61530 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:14:04.878961   61530 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:14:04.878987   61530 node_conditions.go:123] node cpu capacity is 2
	I1004 04:14:04.878997   61530 node_conditions.go:105] duration metric: took 3.943561ms to run NodePressure ...
	I1004 04:14:04.879010   61530 start.go:241] waiting for startup goroutines ...
	I1004 04:14:04.972949   61530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:14:05.011219   61530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:14:05.296318   61530 main.go:141] libmachine: Making call to close driver server
	I1004 04:14:05.296350   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .Close
	I1004 04:14:05.296652   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Closing plugin on server side
	I1004 04:14:05.296726   61530 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:14:05.296744   61530 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:14:05.296758   61530 main.go:141] libmachine: Making call to close driver server
	I1004 04:14:05.296768   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .Close
	I1004 04:14:05.298724   61530 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:14:05.298744   61530 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:14:05.298783   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Closing plugin on server side
	I1004 04:14:05.331895   61530 main.go:141] libmachine: Making call to close driver server
	I1004 04:14:05.331923   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .Close
	I1004 04:14:05.332236   61530 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:14:05.332258   61530 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:14:05.970497   61530 main.go:141] libmachine: Making call to close driver server
	I1004 04:14:05.970533   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .Close
	I1004 04:14:05.970935   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Closing plugin on server side
	I1004 04:14:05.970953   61530 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:14:05.970975   61530 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:14:05.970989   61530 main.go:141] libmachine: Making call to close driver server
	I1004 04:14:05.971000   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) Calling .Close
	I1004 04:14:05.971223   61530 main.go:141] libmachine: (kubernetes-upgrade-326061) DBG | Closing plugin on server side
	I1004 04:14:05.971252   61530 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:14:05.971272   61530 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:14:05.974162   61530 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1004 04:14:05.976119   61530 addons.go:510] duration metric: took 1.438642159s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1004 04:14:05.976179   61530 start.go:246] waiting for cluster config update ...
	I1004 04:14:05.976195   61530 start.go:255] writing updated cluster config ...
	I1004 04:14:05.976521   61530 ssh_runner.go:195] Run: rm -f paused
	I1004 04:14:06.047651   61530 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:14:06.049397   61530 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-326061" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.410771401Z" level=debug msg="Initializing stage for resource k8s_coredns-7c65d6cfc9-97j6m_kube-system_b7be7498-2f95-4eea-a730-d4864d5bd495_0 to sandbox creating" file="resourcestore/resourcestore.go:219" id=e3e75148-b74b-4ef7-b6ae-3f114967d248 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.410813956Z" level=debug msg="Setting stage for resource k8s_coredns-7c65d6cfc9-97j6m_kube-system_b7be7498-2f95-4eea-a730-d4864d5bd495_0 from sandbox creating to sandbox network ready" file="resourcestore/resourcestore.go:227" id=e3e75148-b74b-4ef7-b6ae-3f114967d248 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.410885517Z" level=warning msg="Allowed annotations are specified for workload []" file="config/workloads.go:100"
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.410935256Z" level=debug msg="Setting stage for resource k8s_coredns-7c65d6cfc9-97j6m_kube-system_b7be7498-2f95-4eea-a730-d4864d5bd495_0 from sandbox network ready to sandbox storage creation" file="resourcestore/resourcestore.go:227" id=e3e75148-b74b-4ef7-b6ae-3f114967d248 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.509378173Z" level=debug msg="Created pod sandbox \"507592f149a278d5b3e7a0d8809ce5cb70722f652034583ed12fc7fd8c563479\"" file="storage/runtime.go:239"
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.509969793Z" level=debug msg="Pod sandbox \"507592f149a278d5b3e7a0d8809ce5cb70722f652034583ed12fc7fd8c563479\" has work directory \"/var/lib/containers/storage/overlay-containers/507592f149a278d5b3e7a0d8809ce5cb70722f652034583ed12fc7fd8c563479/userdata\"" file="storage/runtime.go:274"
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.510108792Z" level=debug msg="Pod sandbox \"507592f149a278d5b3e7a0d8809ce5cb70722f652034583ed12fc7fd8c563479\" has run directory \"/var/run/containers/storage/overlay-containers/507592f149a278d5b3e7a0d8809ce5cb70722f652034583ed12fc7fd8c563479/userdata\"" file="storage/runtime.go:284"
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.510261375Z" level=debug msg="exporting opaque data as blob \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" file="storage/storage_src.go:115"
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.510322620Z" level=debug msg="Setting stage for resource k8s_coredns-7c65d6cfc9-6x9ls_kube-system_4c857c9c-5c9d-410b-8aaf-871b73aa6019_0 from sandbox storage creation to sandbox shm creation" file="resourcestore/resourcestore.go:227" id=4ac20485-5e97-486c-8cdc-e8884edd5140 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.514142603Z" level=debug msg="Setting stage for resource k8s_coredns-7c65d6cfc9-6x9ls_kube-system_4c857c9c-5c9d-410b-8aaf-871b73aa6019_0 from sandbox shm creation to sandbox spec configuration" file="resourcestore/resourcestore.go:227" id=4ac20485-5e97-486c-8cdc-e8884edd5140 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.518384631Z" level=debug msg="Created pod sandbox \"e6cdb88b79d894c069268b26feac09f9ea37e30ca2095abb0e87f966d0e241d3\"" file="storage/runtime.go:239"
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.519429262Z" level=debug msg="Pod sandbox \"e6cdb88b79d894c069268b26feac09f9ea37e30ca2095abb0e87f966d0e241d3\" has work directory \"/var/lib/containers/storage/overlay-containers/e6cdb88b79d894c069268b26feac09f9ea37e30ca2095abb0e87f966d0e241d3/userdata\"" file="storage/runtime.go:274"
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.519617164Z" level=debug msg="Pod sandbox \"e6cdb88b79d894c069268b26feac09f9ea37e30ca2095abb0e87f966d0e241d3\" has run directory \"/var/run/containers/storage/overlay-containers/e6cdb88b79d894c069268b26feac09f9ea37e30ca2095abb0e87f966d0e241d3/userdata\"" file="storage/runtime.go:284"
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.519987873Z" level=debug msg="Setting stage for resource k8s_coredns-7c65d6cfc9-97j6m_kube-system_b7be7498-2f95-4eea-a730-d4864d5bd495_0 from sandbox storage creation to sandbox shm creation" file="resourcestore/resourcestore.go:227" id=e3e75148-b74b-4ef7-b6ae-3f114967d248 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.521297636Z" level=debug msg="Setting stage for resource k8s_coredns-7c65d6cfc9-97j6m_kube-system_b7be7498-2f95-4eea-a730-d4864d5bd495_0 from sandbox shm creation to sandbox spec configuration" file="resourcestore/resourcestore.go:227" id=e3e75148-b74b-4ef7-b6ae-3f114967d248 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.522164380Z" level=debug msg="Setting stage for resource k8s_coredns-7c65d6cfc9-6x9ls_kube-system_4c857c9c-5c9d-410b-8aaf-871b73aa6019_0 from sandbox spec configuration to sandbox namespace creation" file="resourcestore/resourcestore.go:227" id=4ac20485-5e97-486c-8cdc-e8884edd5140 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.522338774Z" level=debug msg="Calling pinns with [-d /var/run -f 08c7b3d6-fe5e-4d7d-b3fc-74c98bb34ce3 -s net.ipv4.ip_unprivileged_port_start=0 --ipc --net --uts]" file="nsmgr/nsmgr_linux.go:121"
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.522655249Z" level=debug msg="Setting stage for resource k8s_coredns-7c65d6cfc9-97j6m_kube-system_b7be7498-2f95-4eea-a730-d4864d5bd495_0 from sandbox spec configuration to sandbox namespace creation" file="resourcestore/resourcestore.go:227" id=e3e75148-b74b-4ef7-b6ae-3f114967d248 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.522789273Z" level=debug msg="Calling pinns with [-d /var/run -f 8cfb3a39-2245-4dd1-b22d-c560be407bce -s net.ipv4.ip_unprivileged_port_start=0 --ipc --net --uts]" file="nsmgr/nsmgr_linux.go:121"
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.532359473Z" level=debug msg="Setting stage for resource k8s_coredns-7c65d6cfc9-6x9ls_kube-system_4c857c9c-5c9d-410b-8aaf-871b73aa6019_0 from sandbox namespace creation to sandbox network creation" file="resourcestore/resourcestore.go:227" id=4ac20485-5e97-486c-8cdc-e8884edd5140 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.532677985Z" level=debug msg="Setting stage for resource k8s_coredns-7c65d6cfc9-97j6m_kube-system_b7be7498-2f95-4eea-a730-d4864d5bd495_0 from sandbox namespace creation to sandbox network creation" file="resourcestore/resourcestore.go:227" id=e3e75148-b74b-4ef7-b6ae-3f114967d248 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.534611419Z" level=info msg="Got pod network &{Name:coredns-7c65d6cfc9-97j6m Namespace:kube-system ID:e6cdb88b79d894c069268b26feac09f9ea37e30ca2095abb0e87f966d0e241d3 UID:b7be7498-2f95-4eea-a730-d4864d5bd495 NetNS:/var/run/netns/8cfb3a39-2245-4dd1-b22d-c560be407bce Networks:[] RuntimeConfig:map[bridge:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:/kubepods/burstable/podb7be7498-2f95-4eea-a730-d4864d5bd495 PodAnnotations:0xc00011f368}] Aliases:map[]}" file="ocicni/ocicni.go:795"
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.534700514Z" level=info msg="Adding pod kube-system_coredns-7c65d6cfc9-97j6m to CNI network \"bridge\" (type=bridge)" file="ocicni/ocicni.go:556"
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.534738121Z" level=info msg="Got pod network &{Name:coredns-7c65d6cfc9-6x9ls Namespace:kube-system ID:507592f149a278d5b3e7a0d8809ce5cb70722f652034583ed12fc7fd8c563479 UID:4c857c9c-5c9d-410b-8aaf-871b73aa6019 NetNS:/var/run/netns/08c7b3d6-fe5e-4d7d-b3fc-74c98bb34ce3 Networks:[] RuntimeConfig:map[bridge:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:/kubepods/burstable/pod4c857c9c-5c9d-410b-8aaf-871b73aa6019 PodAnnotations:0xc000568df0}] Aliases:map[]}" file="ocicni/ocicni.go:795"
	Oct 04 04:14:07 kubernetes-upgrade-326061 crio[1846]: time="2024-10-04 04:14:07.534792707Z" level=info msg="Adding pod kube-system_coredns-7c65d6cfc9-6x9ls to CNI network \"bridge\" (type=bridge)" file="ocicni/ocicni.go:556"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	645e35cd03c5b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   1 second ago        Running             kube-proxy                0                   6dce387804bee       kube-proxy-7xqx2
	8759417fe0284       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   1 second ago        Running             storage-provisioner       0                   b85cde4182ee3       storage-provisioner
	a5212a6583bbc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   10 seconds ago      Running             etcd                      2                   9f44c89a13581       etcd-kubernetes-upgrade-326061
	3b02e588d3950       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   10 seconds ago      Running             kube-controller-manager   2                   716a87a20afbc       kube-controller-manager-kubernetes-upgrade-326061
	2f1c098736b92       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   10 seconds ago      Running             kube-scheduler            2                   e93906800779b       kube-scheduler-kubernetes-upgrade-326061
	4af430c3e199c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   10 seconds ago      Running             kube-apiserver            2                   048ed929e8c4e       kube-apiserver-kubernetes-upgrade-326061
	b0d32e37c2c4c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 seconds ago      Exited              kube-scheduler            1                   12726f30881ab       kube-scheduler-kubernetes-upgrade-326061
	601742c948792       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 seconds ago      Exited              etcd                      1                   2877a324d8786       etcd-kubernetes-upgrade-326061
	2cc255788139f       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 seconds ago      Exited              kube-apiserver            1                   66b0a376c9a34       kube-apiserver-kubernetes-upgrade-326061
	5593d402a9ae5       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   16 seconds ago      Exited              kube-controller-manager   1                   48a01dfcf7a8a       kube-controller-manager-kubernetes-upgrade-326061
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-326061
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-326061
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 04:13:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-326061
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 04:14:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 04:14:01 +0000   Fri, 04 Oct 2024 04:13:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 04:14:01 +0000   Fri, 04 Oct 2024 04:13:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 04:14:01 +0000   Fri, 04 Oct 2024 04:13:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 04:14:01 +0000   Fri, 04 Oct 2024 04:13:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.58
	  Hostname:    kubernetes-upgrade-326061
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 66145311e94d459ba6f6644faae914be
	  System UUID:                66145311-e94d-459b-a6f6-644faae914be
	  Boot ID:                    e0a1470b-68a4-406e-b266-7b6cca0c298f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6x9ls                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2s
	  kube-system                 coredns-7c65d6cfc9-97j6m                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2s
	  kube-system                 etcd-kubernetes-upgrade-326061                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24s
	  kube-system                 kube-apiserver-kubernetes-upgrade-326061             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-326061    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-proxy-7xqx2                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-kubernetes-upgrade-326061             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 0s                 kube-proxy       
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s (x8 over 29s)  kubelet          Node kubernetes-upgrade-326061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x8 over 29s)  kubelet          Node kubernetes-upgrade-326061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x7 over 29s)  kubelet          Node kubernetes-upgrade-326061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s (x8 over 11s)  kubelet          Node kubernetes-upgrade-326061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet          Node kubernetes-upgrade-326061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x7 over 11s)  kubelet          Node kubernetes-upgrade-326061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node kubernetes-upgrade-326061 event: Registered Node kubernetes-upgrade-326061 in Controller
	
	
	==> dmesg <==
	[  +1.685956] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.163889] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.076307] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060003] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.237675] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.142413] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.333949] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +4.618194] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[  +0.069828] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.376082] systemd-fstab-generator[846]: Ignoring "noauto" option for root device
	[  +7.381667] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.141693] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.530580] systemd-fstab-generator[1768]: Ignoring "noauto" option for root device
	[  +0.109094] kauditd_printk_skb: 58 callbacks suppressed
	[  +0.082269] systemd-fstab-generator[1780]: Ignoring "noauto" option for root device
	[  +0.240362] systemd-fstab-generator[1796]: Ignoring "noauto" option for root device
	[  +0.165703] systemd-fstab-generator[1808]: Ignoring "noauto" option for root device
	[  +0.404533] systemd-fstab-generator[1836]: Ignoring "noauto" option for root device
	[  +1.265266] systemd-fstab-generator[2166]: Ignoring "noauto" option for root device
	[  +2.216837] systemd-fstab-generator[2290]: Ignoring "noauto" option for root device
	[  +0.836209] kauditd_printk_skb: 187 callbacks suppressed
	[Oct 4 04:14] systemd-fstab-generator[2557]: Ignoring "noauto" option for root device
	[  +0.123262] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [601742c948792dcd4954dc5c50b52ff441c1e18040f923ac2e025c81783f7f49] <==
	{"level":"info","ts":"2024-10-04T04:13:53.378465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b223154dc276ce12 became leader at term 3"}
	{"level":"info","ts":"2024-10-04T04:13:53.378497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b223154dc276ce12 elected leader b223154dc276ce12 at term 3"}
	{"level":"info","ts":"2024-10-04T04:13:53.397212Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b223154dc276ce12","local-member-attributes":"{Name:kubernetes-upgrade-326061 ClientURLs:[https://192.168.50.58:2379]}","request-path":"/0/members/b223154dc276ce12/attributes","cluster-id":"94a432db9bee1c6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T04:13:53.398144Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T04:13:53.405011Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T04:13:53.406199Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T04:13:53.397567Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T04:13:53.418720Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:13:53.419498Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.58:2379"}
	{"level":"info","ts":"2024-10-04T04:13:53.436609Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-04T04:13:53.436674Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-326061","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.58:2380"],"advertise-client-urls":["https://192.168.50.58:2379"]}
	{"level":"info","ts":"2024-10-04T04:13:53.439836Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:13:53.452018Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-10-04T04:13:53.452259Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T04:13:53.452300Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T04:13:53.452355Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59328","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:59328: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T04:13:53.452431Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.58:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T04:13:53.452455Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.58:2379: use of closed network connection"}
	2024/10/04 04:13:53 WARNING: [core] [Channel #4 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	2024/10/04 04:13:53 WARNING: [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "192.168.50.58:2379", ServerName: "192.168.50.58:2379", }. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	{"level":"warn","ts":"2024-10-04T04:13:53.461654Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.50.58:48588","server-name":"","error":"write tcp 192.168.50.58:2379->192.168.50.58:48588: use of closed network connection"}
	{"level":"info","ts":"2024-10-04T04:13:53.461904Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b223154dc276ce12","current-leader-member-id":"b223154dc276ce12"}
	{"level":"info","ts":"2024-10-04T04:13:53.468747Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.58:2380"}
	{"level":"info","ts":"2024-10-04T04:13:53.468878Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.58:2380"}
	{"level":"info","ts":"2024-10-04T04:13:53.468889Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-326061","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.58:2380"],"advertise-client-urls":["https://192.168.50.58:2379"]}
	
	
	==> etcd [a5212a6583bbc26d5055bf33f259753f8abfff079a8f6c958ab3b9e7fa4e46b2] <==
	{"level":"info","ts":"2024-10-04T04:13:59.672781Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T04:13:59.673979Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.58:2379"}
	{"level":"info","ts":"2024-10-04T04:14:03.421010Z","caller":"traceutil/trace.go:171","msg":"trace[990832128] transaction","detail":"{read_only:false; number_of_response:0; response_revision:308; }","duration":"112.27969ms","start":"2024-10-04T04:14:03.308711Z","end":"2024-10-04T04:14:03.420990Z","steps":["trace[990832128] 'process raft request'  (duration: 112.198404ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T04:14:03.557404Z","caller":"traceutil/trace.go:171","msg":"trace[1895685961] transaction","detail":"{read_only:false; number_of_response:0; response_revision:308; }","duration":"128.099509ms","start":"2024-10-04T04:14:03.429243Z","end":"2024-10-04T04:14:03.557343Z","steps":["trace[1895685961] 'process raft request'  (duration: 128.007872ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T04:14:03.901801Z","caller":"traceutil/trace.go:171","msg":"trace[1772877187] linearizableReadLoop","detail":"{readStateIndex:326; appliedIndex:325; }","duration":"175.122827ms","start":"2024-10-04T04:14:03.726659Z","end":"2024-10-04T04:14:03.901782Z","steps":["trace[1772877187] 'read index received'  (duration: 174.99981ms)","trace[1772877187] 'applied index is now lower than readState.Index'  (duration: 122.306µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T04:14:03.901834Z","caller":"traceutil/trace.go:171","msg":"trace[2050325674] transaction","detail":"{read_only:false; number_of_response:0; response_revision:308; }","duration":"239.244187ms","start":"2024-10-04T04:14:03.662572Z","end":"2024-10-04T04:14:03.901816Z","steps":["trace[2050325674] 'process raft request'  (duration: 239.134056ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T04:14:03.901986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.283782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-10-04T04:14:03.902323Z","caller":"traceutil/trace.go:171","msg":"trace[1321128266] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:308; }","duration":"175.653429ms","start":"2024-10-04T04:14:03.726654Z","end":"2024-10-04T04:14:03.902308Z","steps":["trace[1321128266] 'agreement among raft nodes before linearized reading'  (duration: 175.255803ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T04:14:04.030678Z","caller":"traceutil/trace.go:171","msg":"trace[1911520390] linearizableReadLoop","detail":"{readStateIndex:327; appliedIndex:326; }","duration":"106.648931ms","start":"2024-10-04T04:14:03.924016Z","end":"2024-10-04T04:14:04.030665Z","steps":["trace[1911520390] 'read index received'  (duration: 106.562043ms)","trace[1911520390] 'applied index is now lower than readState.Index'  (duration: 86.501µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T04:14:04.030742Z","caller":"traceutil/trace.go:171","msg":"trace[663655848] transaction","detail":"{read_only:false; number_of_response:0; response_revision:308; }","duration":"107.139492ms","start":"2024-10-04T04:14:03.923540Z","end":"2024-10-04T04:14:04.030680Z","steps":["trace[663655848] 'process raft request'  (duration: 107.08268ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T04:14:04.030846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.809207ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller\" ","response":"range_response_count:1 size:232"}
	{"level":"info","ts":"2024-10-04T04:14:04.030906Z","caller":"traceutil/trace.go:171","msg":"trace[908723038] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslicemirroring-controller; range_end:; response_count:1; response_revision:308; }","duration":"106.888768ms","start":"2024-10-04T04:14:03.924009Z","end":"2024-10-04T04:14:04.030898Z","steps":["trace[908723038] 'agreement among raft nodes before linearized reading'  (duration: 106.772637ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T04:14:04.291960Z","caller":"traceutil/trace.go:171","msg":"trace[1035109439] linearizableReadLoop","detail":"{readStateIndex:329; appliedIndex:328; }","duration":"139.341818ms","start":"2024-10-04T04:14:04.152600Z","end":"2024-10-04T04:14:04.291942Z","steps":["trace[1035109439] 'read index received'  (duration: 139.199687ms)","trace[1035109439] 'applied index is now lower than readState.Index'  (duration: 141.461µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T04:14:04.292132Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.508919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-10-04T04:14:04.292172Z","caller":"traceutil/trace.go:171","msg":"trace[636929450] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:308; }","duration":"139.570463ms","start":"2024-10-04T04:14:04.152595Z","end":"2024-10-04T04:14:04.292166Z","steps":["trace[636929450] 'agreement among raft nodes before linearized reading'  (duration: 139.482682ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T04:14:04.292278Z","caller":"traceutil/trace.go:171","msg":"trace[167227662] transaction","detail":"{read_only:false; number_of_response:0; response_revision:308; }","duration":"141.810523ms","start":"2024-10-04T04:14:04.150449Z","end":"2024-10-04T04:14:04.292260Z","steps":["trace[167227662] 'process raft request'  (duration: 141.419611ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T04:14:04.502613Z","caller":"traceutil/trace.go:171","msg":"trace[414879490] transaction","detail":"{read_only:false; number_of_response:0; response_revision:308; }","duration":"103.239339ms","start":"2024-10-04T04:14:04.399353Z","end":"2024-10-04T04:14:04.502592Z","steps":["trace[414879490] 'process raft request'  (duration: 103.134561ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T04:14:07.649340Z","caller":"traceutil/trace.go:171","msg":"trace[1492734989] linearizableReadLoop","detail":"{readStateIndex:432; appliedIndex:431; }","duration":"378.002818ms","start":"2024-10-04T04:14:07.271288Z","end":"2024-10-04T04:14:07.649291Z","steps":["trace[1492734989] 'read index received'  (duration: 377.589807ms)","trace[1492734989] 'applied index is now lower than readState.Index'  (duration: 412.229µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T04:14:07.649500Z","caller":"traceutil/trace.go:171","msg":"trace[490893910] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"397.87429ms","start":"2024-10-04T04:14:07.251609Z","end":"2024-10-04T04:14:07.649483Z","steps":["trace[490893910] 'process raft request'  (duration: 397.36106ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T04:14:07.649733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.193226ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T04:14:07.649776Z","caller":"traceutil/trace.go:171","msg":"trace[1643623110] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:405; }","duration":"237.247856ms","start":"2024-10-04T04:14:07.412514Z","end":"2024-10-04T04:14:07.649762Z","steps":["trace[1643623110] 'agreement among raft nodes before linearized reading'  (duration: 237.171099ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T04:14:07.649901Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"378.612346ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/kubernetes-upgrade-326061\" ","response":"range_response_count:1 size:4574"}
	{"level":"info","ts":"2024-10-04T04:14:07.649925Z","caller":"traceutil/trace.go:171","msg":"trace[1781865012] range","detail":"{range_begin:/registry/minions/kubernetes-upgrade-326061; range_end:; response_count:1; response_revision:405; }","duration":"378.636536ms","start":"2024-10-04T04:14:07.271282Z","end":"2024-10-04T04:14:07.649918Z","steps":["trace[1781865012] 'agreement among raft nodes before linearized reading'  (duration: 378.592899ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T04:14:07.649939Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T04:14:07.271222Z","time spent":"378.713447ms","remote":"127.0.0.1:42116","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":4597,"request content":"key:\"/registry/minions/kubernetes-upgrade-326061\" "}
	{"level":"warn","ts":"2024-10-04T04:14:07.650269Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T04:14:07.251575Z","time spent":"397.949578ms","remote":"127.0.0.1:42128","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4800,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-7xqx2\" mod_revision:384 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-7xqx2\" value_size:4749 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-7xqx2\" > >"}
	
	
	==> kernel <==
	 04:14:08 up 0 min,  0 users,  load average: 1.85, 0.47, 0.16
	Linux kubernetes-upgrade-326061 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2cc255788139f0eb1500d97aa5215bf7d3b9cb834284dcff76ce507c481b79e5] <==
	I1004 04:13:52.075721       1 options.go:228] external host was not specified, using 192.168.50.58
	I1004 04:13:52.081619       1 server.go:142] Version: v1.31.1
	I1004 04:13:52.081694       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [4af430c3e199ca6c79a5a1b2a9542b4eb8606fbc86d23ca9ad476e5c49ba34be] <==
	I1004 04:14:01.819326       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 04:14:01.834232       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1004 04:14:01.851421       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1004 04:14:01.851755       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1004 04:14:01.851981       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1004 04:14:01.854603       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1004 04:14:01.860833       1 shared_informer.go:320] Caches are synced for configmaps
	I1004 04:14:01.873958       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 04:14:01.874139       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 04:14:01.880341       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1004 04:14:01.880715       1 policy_source.go:224] refreshing policies
	I1004 04:14:01.907854       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 04:14:01.909002       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1004 04:14:01.919623       1 cache.go:39] Caches are synced for autoregister controller
	E1004 04:14:01.923144       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1004 04:14:02.656282       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 04:14:03.661894       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 04:14:03.922585       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 04:14:04.149701       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 04:14:04.375500       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 04:14:04.398489       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 04:14:05.334832       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 04:14:05.926913       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1004 04:14:05.999821       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1004 04:14:06.028982       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3b02e588d395010ea4c94064e692f5de75f6f4af2ac5378b1b0add9ecc43d3b7] <==
	I1004 04:14:05.257587       1 shared_informer.go:320] Caches are synced for service account
	I1004 04:14:05.258119       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1004 04:14:05.260754       1 shared_informer.go:320] Caches are synced for crt configmap
	I1004 04:14:05.269307       1 shared_informer.go:320] Caches are synced for daemon sets
	I1004 04:14:05.276899       1 shared_informer.go:320] Caches are synced for disruption
	I1004 04:14:05.276978       1 shared_informer.go:320] Caches are synced for stateful set
	I1004 04:14:05.277018       1 shared_informer.go:320] Caches are synced for GC
	I1004 04:14:05.309516       1 shared_informer.go:320] Caches are synced for deployment
	I1004 04:14:05.310468       1 shared_informer.go:320] Caches are synced for attach detach
	I1004 04:14:05.314603       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="kubernetes-upgrade-326061" podCIDRs=["10.244.0.0/24"]
	I1004 04:14:05.316638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-326061"
	I1004 04:14:05.316822       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-326061"
	I1004 04:14:05.335981       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1004 04:14:05.351439       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1004 04:14:05.351514       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-326061"
	I1004 04:14:05.417223       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 04:14:05.430693       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 04:14:05.740235       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-326061"
	I1004 04:14:05.862581       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 04:14:05.862694       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1004 04:14:05.898828       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 04:14:06.188372       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="173.772322ms"
	I1004 04:14:06.244498       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.023093ms"
	I1004 04:14:06.244615       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="60.032µs"
	I1004 04:14:06.263415       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.906µs"
	
	
	==> kube-controller-manager [5593d402a9ae5aa1716d04c5a82325a68bb7bf87669dfd2f906326b70b143e2e] <==
	I1004 04:13:53.002232       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-proxy [645e35cd03c5b127cb6765903fc6c0a6f6a9efe899decb9eaee833b2fee4b482] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 04:14:07.255541       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 04:14:07.669227       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.58"]
	E1004 04:14:07.669363       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 04:14:07.860644       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 04:14:07.860683       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 04:14:07.860732       1 server_linux.go:169] "Using iptables Proxier"
	I1004 04:14:07.865624       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 04:14:07.865862       1 server.go:483] "Version info" version="v1.31.1"
	I1004 04:14:07.865875       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 04:14:07.871115       1 config.go:199] "Starting service config controller"
	I1004 04:14:07.871145       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 04:14:07.871216       1 config.go:105] "Starting endpoint slice config controller"
	I1004 04:14:07.871221       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 04:14:07.872428       1 config.go:328] "Starting node config controller"
	I1004 04:14:07.872491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 04:14:07.971830       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 04:14:07.971956       1 shared_informer.go:320] Caches are synced for service config
	I1004 04:14:07.972859       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2f1c098736b927ced4fffff95f7f3e890d03d73d42fbd3a121e014458ca032fa] <==
	I1004 04:13:58.674851       1 serving.go:386] Generated self-signed cert in-memory
	W1004 04:14:01.751644       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 04:14:01.751761       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 04:14:01.751796       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 04:14:01.751827       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 04:14:01.878337       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1004 04:14:01.878443       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 04:14:01.888974       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1004 04:14:01.889396       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 04:14:01.889495       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 04:14:01.889697       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1004 04:14:01.990018       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b0d32e37c2c4c8f4080ded2e3c11a9d825bfb1fe7558f331d8954737f5d33fde] <==
	
	
	==> kubelet <==
	Oct 04 04:13:57 kubernetes-upgrade-326061 kubelet[2297]: E1004 04:13:57.788737    2297 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.58:8443: connect: connection refused" node="kubernetes-upgrade-326061"
	Oct 04 04:13:58 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:13:58.590540    2297 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-326061"
	Oct 04 04:14:01 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:01.947421    2297 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-326061"
	Oct 04 04:14:01 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:01.947563    2297 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-326061"
	Oct 04 04:14:01 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:01.989866    2297 apiserver.go:52] "Watching apiserver"
	Oct 04 04:14:02 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:02.019877    2297 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 04 04:14:05 kubernetes-upgrade-326061 kubelet[2297]: W1004 04:14:05.299680    2297 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:kubernetes-upgrade-326061" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-326061' and this object
	Oct 04 04:14:05 kubernetes-upgrade-326061 kubelet[2297]: E1004 04:14:05.299760    2297 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:kubernetes-upgrade-326061\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-326061' and this object" logger="UnhandledError"
	Oct 04 04:14:05 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:05.395487    2297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3af2efab-2e7c-427a-8432-c291c3c6a220-tmp\") pod \"storage-provisioner\" (UID: \"3af2efab-2e7c-427a-8432-c291c3c6a220\") " pod="kube-system/storage-provisioner"
	Oct 04 04:14:05 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:05.395576    2297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54crt\" (UniqueName: \"kubernetes.io/projected/3af2efab-2e7c-427a-8432-c291c3c6a220-kube-api-access-54crt\") pod \"storage-provisioner\" (UID: \"3af2efab-2e7c-427a-8432-c291c3c6a220\") " pod="kube-system/storage-provisioner"
	Oct 04 04:14:06 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:06.100565    2297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f09e57d1-425d-4943-a3cc-ef940948a27c-lib-modules\") pod \"kube-proxy-7xqx2\" (UID: \"f09e57d1-425d-4943-a3cc-ef940948a27c\") " pod="kube-system/kube-proxy-7xqx2"
	Oct 04 04:14:06 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:06.100826    2297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f09e57d1-425d-4943-a3cc-ef940948a27c-kube-proxy\") pod \"kube-proxy-7xqx2\" (UID: \"f09e57d1-425d-4943-a3cc-ef940948a27c\") " pod="kube-system/kube-proxy-7xqx2"
	Oct 04 04:14:06 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:06.100985    2297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f09e57d1-425d-4943-a3cc-ef940948a27c-xtables-lock\") pod \"kube-proxy-7xqx2\" (UID: \"f09e57d1-425d-4943-a3cc-ef940948a27c\") " pod="kube-system/kube-proxy-7xqx2"
	Oct 04 04:14:06 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:06.101157    2297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n44qg\" (UniqueName: \"kubernetes.io/projected/f09e57d1-425d-4943-a3cc-ef940948a27c-kube-api-access-n44qg\") pod \"kube-proxy-7xqx2\" (UID: \"f09e57d1-425d-4943-a3cc-ef940948a27c\") " pod="kube-system/kube-proxy-7xqx2"
	Oct 04 04:14:06 kubernetes-upgrade-326061 kubelet[2297]: W1004 04:14:06.185477    2297 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:kubernetes-upgrade-326061" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-326061' and this object
	Oct 04 04:14:06 kubernetes-upgrade-326061 kubelet[2297]: E1004 04:14:06.185651    2297 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:kubernetes-upgrade-326061\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-326061' and this object" logger="UnhandledError"
	Oct 04 04:14:06 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:06.302078    2297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7be7498-2f95-4eea-a730-d4864d5bd495-config-volume\") pod \"coredns-7c65d6cfc9-97j6m\" (UID: \"b7be7498-2f95-4eea-a730-d4864d5bd495\") " pod="kube-system/coredns-7c65d6cfc9-97j6m"
	Oct 04 04:14:06 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:06.302162    2297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgcnd\" (UniqueName: \"kubernetes.io/projected/b7be7498-2f95-4eea-a730-d4864d5bd495-kube-api-access-xgcnd\") pod \"coredns-7c65d6cfc9-97j6m\" (UID: \"b7be7498-2f95-4eea-a730-d4864d5bd495\") " pod="kube-system/coredns-7c65d6cfc9-97j6m"
	Oct 04 04:14:06 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:06.302204    2297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c857c9c-5c9d-410b-8aaf-871b73aa6019-config-volume\") pod \"coredns-7c65d6cfc9-6x9ls\" (UID: \"4c857c9c-5c9d-410b-8aaf-871b73aa6019\") " pod="kube-system/coredns-7c65d6cfc9-6x9ls"
	Oct 04 04:14:06 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:06.302228    2297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp2n6\" (UniqueName: \"kubernetes.io/projected/4c857c9c-5c9d-410b-8aaf-871b73aa6019-kube-api-access-vp2n6\") pod \"coredns-7c65d6cfc9-6x9ls\" (UID: \"4c857c9c-5c9d-410b-8aaf-871b73aa6019\") " pod="kube-system/coredns-7c65d6cfc9-6x9ls"
	Oct 04 04:14:06 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:06.363257    2297 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 04 04:14:07 kubernetes-upgrade-326061 kubelet[2297]: E1004 04:14:07.099107    2297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015247098426021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:14:07 kubernetes-upgrade-326061 kubelet[2297]: E1004 04:14:07.099138    2297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015247098426021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:14:07 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:07.661556    2297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7xqx2" podStartSLOduration=2.661518072 podStartE2EDuration="2.661518072s" podCreationTimestamp="2024-10-04 04:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-04 04:14:07.656503958 +0000 UTC m=+10.770872488" watchObservedRunningTime="2024-10-04 04:14:07.661518072 +0000 UTC m=+10.775886603"
	Oct 04 04:14:07 kubernetes-upgrade-326061 kubelet[2297]: I1004 04:14:07.809369    2297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=20.809343891 podStartE2EDuration="20.809343891s" podCreationTimestamp="2024-10-04 04:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-04 04:14:07.805484723 +0000 UTC m=+10.919853253" watchObservedRunningTime="2024-10-04 04:14:07.809343891 +0000 UTC m=+10.923712422"
	
	
	==> storage-provisioner [8759417fe0284aeb7a48f0960ac806f26adbce0a0e29a128fd37186acfee44eb] <==
	I1004 04:14:06.812881       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-326061 -n kubernetes-upgrade-326061
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-326061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-326061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-326061
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-326061: (1.166414665s)
--- FAIL: TestKubernetesUpgrade (391.47s)
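To reproduce this failure outside of CI, the failed test can be re-run on its own and the same post-mortem commands repeated by hand. A minimal sketch, assuming the standard minikube repository layout (integration tests under test/integration) and that a profile with the same name still exists; the go test invocation below is an assumption, while the remaining commands are the ones recorded by helpers_test.go above:

	# assumed invocation: run only the upgrade test from the minikube repo root
	go test ./test/integration -run 'TestKubernetesUpgrade$' -v -timeout 60m
	# post-mortem inspection and cleanup, as recorded above
	out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-326061 -n kubernetes-upgrade-326061
	kubectl --context kubernetes-upgrade-326061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-326061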

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (65.76s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-353264 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-353264 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.335135796s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-353264] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-353264" primary control-plane node in "pause-353264" cluster
	* Updating the running kvm2 "pause-353264" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-353264" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 04:11:13.511890   59356 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:11:13.512058   59356 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:11:13.512069   59356 out.go:358] Setting ErrFile to fd 2...
	I1004 04:11:13.512075   59356 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:11:13.512403   59356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:11:13.513114   59356 out.go:352] Setting JSON to false
	I1004 04:11:13.514117   59356 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6818,"bootTime":1728008255,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 04:11:13.514227   59356 start.go:139] virtualization: kvm guest
	I1004 04:11:13.516395   59356 out.go:177] * [pause-353264] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 04:11:13.518275   59356 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 04:11:13.518282   59356 notify.go:220] Checking for updates...
	I1004 04:11:13.520990   59356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 04:11:13.522598   59356 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:11:13.524054   59356 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:11:13.525573   59356 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 04:11:13.526888   59356 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 04:11:13.528972   59356 config.go:182] Loaded profile config "pause-353264": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:11:13.529652   59356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:11:13.529748   59356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:11:13.546427   59356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I1004 04:11:13.547039   59356 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:11:13.547633   59356 main.go:141] libmachine: Using API Version  1
	I1004 04:11:13.547656   59356 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:11:13.548066   59356 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:11:13.548460   59356 main.go:141] libmachine: (pause-353264) Calling .DriverName
	I1004 04:11:13.548806   59356 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 04:11:13.549252   59356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:11:13.549304   59356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:11:13.564441   59356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39811
	I1004 04:11:13.564931   59356 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:11:13.565404   59356 main.go:141] libmachine: Using API Version  1
	I1004 04:11:13.565430   59356 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:11:13.565764   59356 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:11:13.565994   59356 main.go:141] libmachine: (pause-353264) Calling .DriverName
	I1004 04:11:13.606049   59356 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 04:11:13.607403   59356 start.go:297] selected driver: kvm2
	I1004 04:11:13.607426   59356 start.go:901] validating driver "kvm2" against &{Name:pause-353264 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:pause-353264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:11:13.607652   59356 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 04:11:13.608168   59356 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:11:13.608293   59356 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 04:11:13.625885   59356 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 04:11:13.626656   59356 cni.go:84] Creating CNI manager for ""
	I1004 04:11:13.626716   59356 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:11:13.626789   59356 start.go:340] cluster config:
	{Name:pause-353264 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-353264 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-ali
ases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:11:13.626936   59356 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:11:13.628783   59356 out.go:177] * Starting "pause-353264" primary control-plane node in "pause-353264" cluster
	I1004 04:11:13.630057   59356 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:11:13.630100   59356 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 04:11:13.630113   59356 cache.go:56] Caching tarball of preloaded images
	I1004 04:11:13.630207   59356 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 04:11:13.630219   59356 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 04:11:13.630355   59356 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/pause-353264/config.json ...
	I1004 04:11:13.630586   59356 start.go:360] acquireMachinesLock for pause-353264: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:11:26.456708   59356 start.go:364] duration metric: took 12.82606879s to acquireMachinesLock for "pause-353264"
	I1004 04:11:26.456770   59356 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:11:26.456782   59356 fix.go:54] fixHost starting: 
	I1004 04:11:26.457266   59356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:11:26.457322   59356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:11:26.474242   59356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43139
	I1004 04:11:26.474733   59356 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:11:26.475388   59356 main.go:141] libmachine: Using API Version  1
	I1004 04:11:26.475414   59356 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:11:26.475763   59356 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:11:26.475975   59356 main.go:141] libmachine: (pause-353264) Calling .DriverName
	I1004 04:11:26.476137   59356 main.go:141] libmachine: (pause-353264) Calling .GetState
	I1004 04:11:26.477975   59356 fix.go:112] recreateIfNeeded on pause-353264: state=Running err=<nil>
	W1004 04:11:26.477995   59356 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:11:26.479898   59356 out.go:177] * Updating the running kvm2 "pause-353264" VM ...
	I1004 04:11:26.481566   59356 machine.go:93] provisionDockerMachine start ...
	I1004 04:11:26.481592   59356 main.go:141] libmachine: (pause-353264) Calling .DriverName
	I1004 04:11:26.481897   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHHostname
	I1004 04:11:26.484995   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:26.485422   59356 main.go:141] libmachine: (pause-353264) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ea:c1", ip: ""} in network mk-pause-353264: {Iface:virbr4 ExpiryTime:2024-10-04 05:10:32 +0000 UTC Type:0 Mac:52:54:00:82:ea:c1 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:pause-353264 Clientid:01:52:54:00:82:ea:c1}
	I1004 04:11:26.485457   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined IP address 192.168.39.41 and MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:26.485637   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHPort
	I1004 04:11:26.485861   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHKeyPath
	I1004 04:11:26.486042   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHKeyPath
	I1004 04:11:26.486206   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHUsername
	I1004 04:11:26.486389   59356 main.go:141] libmachine: Using SSH client type: native
	I1004 04:11:26.486659   59356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1004 04:11:26.486677   59356 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:11:26.596807   59356 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-353264
	
	I1004 04:11:26.596834   59356 main.go:141] libmachine: (pause-353264) Calling .GetMachineName
	I1004 04:11:26.597050   59356 buildroot.go:166] provisioning hostname "pause-353264"
	I1004 04:11:26.597081   59356 main.go:141] libmachine: (pause-353264) Calling .GetMachineName
	I1004 04:11:26.597327   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHHostname
	I1004 04:11:26.600385   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:26.600761   59356 main.go:141] libmachine: (pause-353264) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ea:c1", ip: ""} in network mk-pause-353264: {Iface:virbr4 ExpiryTime:2024-10-04 05:10:32 +0000 UTC Type:0 Mac:52:54:00:82:ea:c1 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:pause-353264 Clientid:01:52:54:00:82:ea:c1}
	I1004 04:11:26.600799   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined IP address 192.168.39.41 and MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:26.600917   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHPort
	I1004 04:11:26.601121   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHKeyPath
	I1004 04:11:26.601261   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHKeyPath
	I1004 04:11:26.601423   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHUsername
	I1004 04:11:26.601586   59356 main.go:141] libmachine: Using SSH client type: native
	I1004 04:11:26.601795   59356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1004 04:11:26.601808   59356 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-353264 && echo "pause-353264" | sudo tee /etc/hostname
	I1004 04:11:26.725563   59356 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-353264
	
	I1004 04:11:26.725598   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHHostname
	I1004 04:11:26.728490   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:26.728808   59356 main.go:141] libmachine: (pause-353264) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ea:c1", ip: ""} in network mk-pause-353264: {Iface:virbr4 ExpiryTime:2024-10-04 05:10:32 +0000 UTC Type:0 Mac:52:54:00:82:ea:c1 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:pause-353264 Clientid:01:52:54:00:82:ea:c1}
	I1004 04:11:26.728852   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined IP address 192.168.39.41 and MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:26.728968   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHPort
	I1004 04:11:26.729201   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHKeyPath
	I1004 04:11:26.729358   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHKeyPath
	I1004 04:11:26.729512   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHUsername
	I1004 04:11:26.729667   59356 main.go:141] libmachine: Using SSH client type: native
	I1004 04:11:26.729841   59356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1004 04:11:26.729857   59356 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-353264' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-353264/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-353264' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:11:26.833094   59356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:11:26.833142   59356 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:11:26.833174   59356 buildroot.go:174] setting up certificates
	I1004 04:11:26.833186   59356 provision.go:84] configureAuth start
	I1004 04:11:26.833195   59356 main.go:141] libmachine: (pause-353264) Calling .GetMachineName
	I1004 04:11:26.833512   59356 main.go:141] libmachine: (pause-353264) Calling .GetIP
	I1004 04:11:26.836363   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:26.836700   59356 main.go:141] libmachine: (pause-353264) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ea:c1", ip: ""} in network mk-pause-353264: {Iface:virbr4 ExpiryTime:2024-10-04 05:10:32 +0000 UTC Type:0 Mac:52:54:00:82:ea:c1 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:pause-353264 Clientid:01:52:54:00:82:ea:c1}
	I1004 04:11:26.836727   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined IP address 192.168.39.41 and MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:26.836840   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHHostname
	I1004 04:11:26.839051   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:26.839338   59356 main.go:141] libmachine: (pause-353264) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ea:c1", ip: ""} in network mk-pause-353264: {Iface:virbr4 ExpiryTime:2024-10-04 05:10:32 +0000 UTC Type:0 Mac:52:54:00:82:ea:c1 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:pause-353264 Clientid:01:52:54:00:82:ea:c1}
	I1004 04:11:26.839378   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined IP address 192.168.39.41 and MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:26.839494   59356 provision.go:143] copyHostCerts
	I1004 04:11:26.839565   59356 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:11:26.839579   59356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:11:26.839630   59356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:11:26.839734   59356 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:11:26.839742   59356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:11:26.839761   59356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:11:26.839849   59356 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:11:26.839858   59356 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:11:26.839878   59356 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:11:26.839952   59356 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.pause-353264 san=[127.0.0.1 192.168.39.41 localhost minikube pause-353264]
	I1004 04:11:27.255594   59356 provision.go:177] copyRemoteCerts
	I1004 04:11:27.255650   59356 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:11:27.255669   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHHostname
	I1004 04:11:27.258463   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:27.258895   59356 main.go:141] libmachine: (pause-353264) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ea:c1", ip: ""} in network mk-pause-353264: {Iface:virbr4 ExpiryTime:2024-10-04 05:10:32 +0000 UTC Type:0 Mac:52:54:00:82:ea:c1 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:pause-353264 Clientid:01:52:54:00:82:ea:c1}
	I1004 04:11:27.258925   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined IP address 192.168.39.41 and MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:27.259144   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHPort
	I1004 04:11:27.259359   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHKeyPath
	I1004 04:11:27.259548   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHUsername
	I1004 04:11:27.259676   59356 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/pause-353264/id_rsa Username:docker}
	I1004 04:11:27.342787   59356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:11:27.375996   59356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1004 04:11:27.409765   59356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:11:27.438564   59356 provision.go:87] duration metric: took 605.367804ms to configureAuth
	I1004 04:11:27.438598   59356 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:11:27.438796   59356 config.go:182] Loaded profile config "pause-353264": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:11:27.438870   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHHostname
	I1004 04:11:27.441730   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:27.442115   59356 main.go:141] libmachine: (pause-353264) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ea:c1", ip: ""} in network mk-pause-353264: {Iface:virbr4 ExpiryTime:2024-10-04 05:10:32 +0000 UTC Type:0 Mac:52:54:00:82:ea:c1 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:pause-353264 Clientid:01:52:54:00:82:ea:c1}
	I1004 04:11:27.442139   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined IP address 192.168.39.41 and MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:27.442328   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHPort
	I1004 04:11:27.442525   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHKeyPath
	I1004 04:11:27.442677   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHKeyPath
	I1004 04:11:27.442790   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHUsername
	I1004 04:11:27.442919   59356 main.go:141] libmachine: Using SSH client type: native
	I1004 04:11:27.443088   59356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1004 04:11:27.443103   59356 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:11:33.018900   59356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:11:33.018929   59356 machine.go:96] duration metric: took 6.537345022s to provisionDockerMachine
	I1004 04:11:33.018947   59356 start.go:293] postStartSetup for "pause-353264" (driver="kvm2")
	I1004 04:11:33.018956   59356 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:11:33.018973   59356 main.go:141] libmachine: (pause-353264) Calling .DriverName
	I1004 04:11:33.019377   59356 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:11:33.019410   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHHostname
	I1004 04:11:33.022220   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:33.022694   59356 main.go:141] libmachine: (pause-353264) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ea:c1", ip: ""} in network mk-pause-353264: {Iface:virbr4 ExpiryTime:2024-10-04 05:10:32 +0000 UTC Type:0 Mac:52:54:00:82:ea:c1 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:pause-353264 Clientid:01:52:54:00:82:ea:c1}
	I1004 04:11:33.022727   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined IP address 192.168.39.41 and MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:33.022901   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHPort
	I1004 04:11:33.023095   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHKeyPath
	I1004 04:11:33.023248   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHUsername
	I1004 04:11:33.023373   59356 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/pause-353264/id_rsa Username:docker}
	I1004 04:11:33.108242   59356 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:11:33.114852   59356 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:11:33.114892   59356 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:11:33.114957   59356 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:11:33.115054   59356 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:11:33.115147   59356 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:11:33.129400   59356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:11:33.160004   59356 start.go:296] duration metric: took 141.044701ms for postStartSetup
	I1004 04:11:33.160050   59356 fix.go:56] duration metric: took 6.703268825s for fixHost
	I1004 04:11:33.160139   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHHostname
	I1004 04:11:33.162870   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:33.163490   59356 main.go:141] libmachine: (pause-353264) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ea:c1", ip: ""} in network mk-pause-353264: {Iface:virbr4 ExpiryTime:2024-10-04 05:10:32 +0000 UTC Type:0 Mac:52:54:00:82:ea:c1 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:pause-353264 Clientid:01:52:54:00:82:ea:c1}
	I1004 04:11:33.163523   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined IP address 192.168.39.41 and MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:33.163738   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHPort
	I1004 04:11:33.163957   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHKeyPath
	I1004 04:11:33.164120   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHKeyPath
	I1004 04:11:33.164245   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHUsername
	I1004 04:11:33.164432   59356 main.go:141] libmachine: Using SSH client type: native
	I1004 04:11:33.164638   59356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1004 04:11:33.164649   59356 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:11:33.277121   59356 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015093.267555261
	
	I1004 04:11:33.277142   59356 fix.go:216] guest clock: 1728015093.267555261
	I1004 04:11:33.277152   59356 fix.go:229] Guest: 2024-10-04 04:11:33.267555261 +0000 UTC Remote: 2024-10-04 04:11:33.160055666 +0000 UTC m=+19.688711439 (delta=107.499595ms)
	I1004 04:11:33.277200   59356 fix.go:200] guest clock delta is within tolerance: 107.499595ms
	I1004 04:11:33.277205   59356 start.go:83] releasing machines lock for "pause-353264", held for 6.820464679s
	I1004 04:11:33.277226   59356 main.go:141] libmachine: (pause-353264) Calling .DriverName
	I1004 04:11:33.277474   59356 main.go:141] libmachine: (pause-353264) Calling .GetIP
	I1004 04:11:33.280314   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:33.280742   59356 main.go:141] libmachine: (pause-353264) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ea:c1", ip: ""} in network mk-pause-353264: {Iface:virbr4 ExpiryTime:2024-10-04 05:10:32 +0000 UTC Type:0 Mac:52:54:00:82:ea:c1 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:pause-353264 Clientid:01:52:54:00:82:ea:c1}
	I1004 04:11:33.280779   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined IP address 192.168.39.41 and MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:33.280971   59356 main.go:141] libmachine: (pause-353264) Calling .DriverName
	I1004 04:11:33.281579   59356 main.go:141] libmachine: (pause-353264) Calling .DriverName
	I1004 04:11:33.281731   59356 main.go:141] libmachine: (pause-353264) Calling .DriverName
	I1004 04:11:33.281815   59356 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:11:33.281857   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHHostname
	I1004 04:11:33.281904   59356 ssh_runner.go:195] Run: cat /version.json
	I1004 04:11:33.281920   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHHostname
	I1004 04:11:33.284740   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:33.284789   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:33.285112   59356 main.go:141] libmachine: (pause-353264) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ea:c1", ip: ""} in network mk-pause-353264: {Iface:virbr4 ExpiryTime:2024-10-04 05:10:32 +0000 UTC Type:0 Mac:52:54:00:82:ea:c1 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:pause-353264 Clientid:01:52:54:00:82:ea:c1}
	I1004 04:11:33.285135   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined IP address 192.168.39.41 and MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:33.285225   59356 main.go:141] libmachine: (pause-353264) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ea:c1", ip: ""} in network mk-pause-353264: {Iface:virbr4 ExpiryTime:2024-10-04 05:10:32 +0000 UTC Type:0 Mac:52:54:00:82:ea:c1 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:pause-353264 Clientid:01:52:54:00:82:ea:c1}
	I1004 04:11:33.285247   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined IP address 192.168.39.41 and MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:33.285418   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHPort
	I1004 04:11:33.285514   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHPort
	I1004 04:11:33.285592   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHKeyPath
	I1004 04:11:33.285622   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHKeyPath
	I1004 04:11:33.285789   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHUsername
	I1004 04:11:33.285790   59356 main.go:141] libmachine: (pause-353264) Calling .GetSSHUsername
	I1004 04:11:33.285940   59356 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/pause-353264/id_rsa Username:docker}
	I1004 04:11:33.286027   59356 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/pause-353264/id_rsa Username:docker}
	I1004 04:11:33.362366   59356 ssh_runner.go:195] Run: systemctl --version
	I1004 04:11:33.389986   59356 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:11:33.571050   59356 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:11:33.577833   59356 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:11:33.577893   59356 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:11:33.588031   59356 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1004 04:11:33.588057   59356 start.go:495] detecting cgroup driver to use...
	I1004 04:11:33.588124   59356 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:11:33.607270   59356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:11:33.626569   59356 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:11:33.626694   59356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:11:33.646257   59356 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:11:33.663740   59356 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:11:33.806596   59356 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:11:33.962021   59356 docker.go:233] disabling docker service ...
	I1004 04:11:33.962100   59356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:11:33.980904   59356 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:11:33.998550   59356 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:11:34.147869   59356 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:11:34.305522   59356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:11:34.321392   59356 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:11:34.344304   59356 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:11:34.344373   59356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:11:34.360572   59356 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:11:34.360644   59356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:11:34.375873   59356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:11:34.390719   59356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:11:34.404970   59356 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:11:34.420103   59356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:11:34.437039   59356 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:11:34.452125   59356 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:11:34.468067   59356 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:11:34.479597   59356 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:11:34.489898   59356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:11:34.631620   59356 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:11:40.272786   59356 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.641126618s)
	I1004 04:11:40.272817   59356 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:11:40.272876   59356 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:11:40.277971   59356 start.go:563] Will wait 60s for crictl version
	I1004 04:11:40.278033   59356 ssh_runner.go:195] Run: which crictl
	I1004 04:11:40.282364   59356 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:11:40.320766   59356 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:11:40.320855   59356 ssh_runner.go:195] Run: crio --version
	I1004 04:11:40.353949   59356 ssh_runner.go:195] Run: crio --version
	I1004 04:11:40.389086   59356 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:11:40.390552   59356 main.go:141] libmachine: (pause-353264) Calling .GetIP
	I1004 04:11:40.393783   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:40.394146   59356 main.go:141] libmachine: (pause-353264) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:ea:c1", ip: ""} in network mk-pause-353264: {Iface:virbr4 ExpiryTime:2024-10-04 05:10:32 +0000 UTC Type:0 Mac:52:54:00:82:ea:c1 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:pause-353264 Clientid:01:52:54:00:82:ea:c1}
	I1004 04:11:40.394177   59356 main.go:141] libmachine: (pause-353264) DBG | domain pause-353264 has defined IP address 192.168.39.41 and MAC address 52:54:00:82:ea:c1 in network mk-pause-353264
	I1004 04:11:40.394449   59356 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 04:11:40.399045   59356 kubeadm.go:883] updating cluster {Name:pause-353264 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:pause-353264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-se
curity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:11:40.399180   59356 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:11:40.399255   59356 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:11:40.451367   59356 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:11:40.451390   59356 crio.go:433] Images already preloaded, skipping extraction
	I1004 04:11:40.451443   59356 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:11:40.493931   59356 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:11:40.493957   59356 cache_images.go:84] Images are preloaded, skipping loading
	I1004 04:11:40.493966   59356 kubeadm.go:934] updating node { 192.168.39.41 8443 v1.31.1 crio true true} ...
	I1004 04:11:40.494097   59356 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-353264 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-353264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:11:40.494176   59356 ssh_runner.go:195] Run: crio config
	I1004 04:11:40.545597   59356 cni.go:84] Creating CNI manager for ""
	I1004 04:11:40.545620   59356 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:11:40.545630   59356 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:11:40.545656   59356 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-353264 NodeName:pause-353264 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:11:40.545782   59356 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-353264"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:11:40.545841   59356 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:11:40.556686   59356 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:11:40.556766   59356 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:11:40.566691   59356 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1004 04:11:40.586097   59356 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:11:40.605998   59356 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1004 04:11:40.625388   59356 ssh_runner.go:195] Run: grep 192.168.39.41	control-plane.minikube.internal$ /etc/hosts
	I1004 04:11:40.629635   59356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:11:40.762310   59356 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:11:40.782016   59356 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/pause-353264 for IP: 192.168.39.41
	I1004 04:11:40.782041   59356 certs.go:194] generating shared ca certs ...
	I1004 04:11:40.782065   59356 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:11:40.782252   59356 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:11:40.782317   59356 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:11:40.782332   59356 certs.go:256] generating profile certs ...
	I1004 04:11:40.782436   59356 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/pause-353264/client.key
	I1004 04:11:40.782514   59356 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/pause-353264/apiserver.key.a702fa37
	I1004 04:11:40.782575   59356 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/pause-353264/proxy-client.key
	I1004 04:11:40.782725   59356 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:11:40.782775   59356 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:11:40.782789   59356 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:11:40.782823   59356 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:11:40.782857   59356 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:11:40.782888   59356 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:11:40.782944   59356 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:11:40.783582   59356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:11:40.812757   59356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:11:40.843642   59356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:11:40.872537   59356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:11:40.900601   59356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/pause-353264/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1004 04:11:40.927839   59356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/pause-353264/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 04:11:40.960948   59356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/pause-353264/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:11:40.989773   59356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/pause-353264/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 04:11:41.017653   59356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:11:41.044545   59356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:11:41.074151   59356 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:11:41.101587   59356 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:11:41.120315   59356 ssh_runner.go:195] Run: openssl version
	I1004 04:11:41.126476   59356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:11:41.138129   59356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:11:41.142970   59356 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:11:41.143023   59356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:11:41.149128   59356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:11:41.159866   59356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:11:41.171948   59356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:11:41.176708   59356 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:11:41.176779   59356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:11:41.183446   59356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:11:41.193977   59356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:11:41.205521   59356 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:11:41.210498   59356 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:11:41.210557   59356 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:11:41.216527   59356 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:11:41.274358   59356 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:11:41.301339   59356 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:11:41.328682   59356 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:11:41.385422   59356 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:11:41.459533   59356 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:11:41.496595   59356 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:11:41.538361   59356 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1004 04:11:41.565028   59356 kubeadm.go:392] StartCluster: {Name:pause-353264 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-353264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:11:41.565189   59356 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:11:41.565266   59356 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:11:41.867357   59356 cri.go:89] found id: "9aff26b1ec7bb1ec30ebf45e6a1956367f821edea625ad7dcc9b2471118c1503"
	I1004 04:11:41.867382   59356 cri.go:89] found id: "36279bf5603bc0e718742b055d454240e5aac0045b783d2c0bda216288633d35"
	I1004 04:11:41.867386   59356 cri.go:89] found id: "7934b01963c4be725cd65138a3481e7a73e7902fe8f0e73707284f44de4f47ef"
	I1004 04:11:41.867390   59356 cri.go:89] found id: "dca00a9c6d44e15b6fd112c388a715d1ba7ec29eb6d9bf7442109db7344a9a29"
	I1004 04:11:41.867393   59356 cri.go:89] found id: "346886f5f536df47a082d4d2dd7efdffaf14f1bb91e0952c78479bfd5390a504"
	I1004 04:11:41.867396   59356 cri.go:89] found id: "d646c22ba6b96f258350a6838a3660ce8dde1e00b867d2d12acd47a6df76efa7"
	I1004 04:11:41.867398   59356 cri.go:89] found id: "b416de01dcbc1ba64d38f526c549c66995ceb518840823815841bf4ac1e79799"
	I1004 04:11:41.867401   59356 cri.go:89] found id: ""
	I1004 04:11:41.867448   59356 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
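
The stderr log above checks whether the existing control-plane certificates can be reused by running openssl x509 -noout -in <cert> -checkend 86400 against each of them, i.e. asking whether the certificate will still be valid 24 hours from now. A minimal Go sketch of that same check is shown below, using only the standard library; it is an illustrative assumption for this report, not minikube's actual implementation, and the certificate path is simply one of the paths probed in the log above.

// cert_checkend.go: sketch of "does this certificate expire within 24 hours?",
// the question openssl's `-checkend 86400` answers (exit status 1 if it does).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at certPath will have
// expired `window` from now.
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expiring within the window means NotAfter falls before now+window.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; adjust as needed.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
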
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-353264 -n pause-353264
E1004 04:12:15.014544   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-353264 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-353264 logs -n 25: (1.408107264s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-204413 sudo                  | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-204413 sudo                  | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-204413 sudo cat              | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-204413 sudo cat              | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-204413 sudo                  | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-204413 sudo                  | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-204413 sudo                  | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-204413 sudo find             | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-204413 sudo crio             | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-204413                       | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC | 04 Oct 24 04:07 UTC |
	| start   | -p force-systemd-flag-519066           | force-systemd-flag-519066 | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC | 04 Oct 24 04:09 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p offline-crio-329336                 | offline-crio-329336       | jenkins | v1.34.0 | 04 Oct 24 04:08 UTC | 04 Oct 24 04:08 UTC |
	| start   | -p cert-expiration-363290              | cert-expiration-363290    | jenkins | v1.34.0 | 04 Oct 24 04:08 UTC | 04 Oct 24 04:10 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-391967            | force-systemd-env-391967  | jenkins | v1.34.0 | 04 Oct 24 04:09 UTC | 04 Oct 24 04:09 UTC |
	| start   | -p cert-options-756541                 | cert-options-756541       | jenkins | v1.34.0 | 04 Oct 24 04:09 UTC | 04 Oct 24 04:10 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-519066 ssh cat      | force-systemd-flag-519066 | jenkins | v1.34.0 | 04 Oct 24 04:09 UTC | 04 Oct 24 04:09 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-519066           | force-systemd-flag-519066 | jenkins | v1.34.0 | 04 Oct 24 04:09 UTC | 04 Oct 24 04:09 UTC |
	| start   | -p pause-353264 --memory=2048          | pause-353264              | jenkins | v1.34.0 | 04 Oct 24 04:09 UTC | 04 Oct 24 04:11 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-756541 ssh                | cert-options-756541       | jenkins | v1.34.0 | 04 Oct 24 04:10 UTC | 04 Oct 24 04:10 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-756541 -- sudo         | cert-options-756541       | jenkins | v1.34.0 | 04 Oct 24 04:10 UTC | 04 Oct 24 04:10 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-756541                 | cert-options-756541       | jenkins | v1.34.0 | 04 Oct 24 04:10 UTC | 04 Oct 24 04:10 UTC |
	| start   | -p stopped-upgrade-389737              | minikube                  | jenkins | v1.26.0 | 04 Oct 24 04:10 UTC | 04 Oct 24 04:11 UTC |
	|         | --memory=2200 --vm-driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-353264                        | pause-353264              | jenkins | v1.34.0 | 04 Oct 24 04:11 UTC | 04 Oct 24 04:12 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-389737 stop            | minikube                  | jenkins | v1.26.0 | 04 Oct 24 04:11 UTC | 04 Oct 24 04:11 UTC |
	| start   | -p stopped-upgrade-389737              | stopped-upgrade-389737    | jenkins | v1.34.0 | 04 Oct 24 04:11 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 04:11:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 04:11:54.923086   59686 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:11:54.923193   59686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:11:54.923202   59686 out.go:358] Setting ErrFile to fd 2...
	I1004 04:11:54.923206   59686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:11:54.923386   59686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:11:54.923916   59686 out.go:352] Setting JSON to false
	I1004 04:11:54.924792   59686 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6860,"bootTime":1728008255,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 04:11:54.924884   59686 start.go:139] virtualization: kvm guest
	I1004 04:11:54.926766   59686 out.go:177] * [stopped-upgrade-389737] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 04:11:54.927932   59686 notify.go:220] Checking for updates...
	I1004 04:11:54.927935   59686 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 04:11:54.929060   59686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 04:11:54.930375   59686 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:11:54.931581   59686 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:11:54.932704   59686 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 04:11:54.933823   59686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 04:11:54.935464   59686 config.go:182] Loaded profile config "stopped-upgrade-389737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1004 04:11:54.936049   59686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:11:54.936104   59686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:11:54.950715   59686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37825
	I1004 04:11:54.951241   59686 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:11:54.951753   59686 main.go:141] libmachine: Using API Version  1
	I1004 04:11:54.951773   59686 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:11:54.952098   59686 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:11:54.952267   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .DriverName
	I1004 04:11:54.954009   59686 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1004 04:11:54.955156   59686 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 04:11:54.955496   59686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:11:54.955532   59686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:11:54.970319   59686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36407
	I1004 04:11:54.970875   59686 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:11:54.971339   59686 main.go:141] libmachine: Using API Version  1
	I1004 04:11:54.971363   59686 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:11:54.971697   59686 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:11:54.971904   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .DriverName
	I1004 04:11:55.008627   59686 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 04:11:55.010513   59686 start.go:297] selected driver: kvm2
	I1004 04:11:55.010532   59686 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-389737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-389737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.179 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1004 04:11:55.010665   59686 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 04:11:55.011534   59686 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:11:55.011624   59686 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 04:11:55.027285   59686 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 04:11:55.027843   59686 cni.go:84] Creating CNI manager for ""
	I1004 04:11:55.027917   59686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:11:55.028031   59686 start.go:340] cluster config:
	{Name:stopped-upgrade-389737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-389737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.179 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1004 04:11:55.028187   59686 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:11:55.030023   59686 out.go:177] * Starting "stopped-upgrade-389737" primary control-plane node in "stopped-upgrade-389737" cluster
	I1004 04:11:55.031443   59686 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I1004 04:11:55.031485   59686 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I1004 04:11:55.031496   59686 cache.go:56] Caching tarball of preloaded images
	I1004 04:11:55.031575   59686 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 04:11:55.031586   59686 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I1004 04:11:55.031674   59686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/stopped-upgrade-389737/config.json ...
	I1004 04:11:55.031892   59686 start.go:360] acquireMachinesLock for stopped-upgrade-389737: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:11:55.031951   59686 start.go:364] duration metric: took 32.807µs to acquireMachinesLock for "stopped-upgrade-389737"
	I1004 04:11:55.031965   59686 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:11:55.031972   59686 fix.go:54] fixHost starting: 
	I1004 04:11:55.032373   59686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:11:55.032417   59686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:11:55.047497   59686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I1004 04:11:55.048031   59686 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:11:55.048736   59686 main.go:141] libmachine: Using API Version  1
	I1004 04:11:55.048782   59686 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:11:55.049134   59686 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:11:55.049372   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .DriverName
	I1004 04:11:55.049557   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetState
	I1004 04:11:55.051769   59686 fix.go:112] recreateIfNeeded on stopped-upgrade-389737: state=Stopped err=<nil>
	I1004 04:11:55.051842   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .DriverName
	W1004 04:11:55.052020   59686 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:11:55.054019   59686 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-389737" ...
	I1004 04:11:54.077902   59356 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:11:54.311139   59356 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:11:54.381749   59356 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:11:54.452210   59356 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:11:54.452297   59356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:11:54.952940   59356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:11:55.453017   59356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:11:55.469251   59356 api_server.go:72] duration metric: took 1.017038288s to wait for apiserver process to appear ...
	I1004 04:11:55.469282   59356 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:11:55.469305   59356 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1004 04:11:57.854910   59356 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:11:57.854940   59356 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:11:57.854957   59356 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1004 04:11:57.965567   59356 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:11:57.965610   59356 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:11:57.969831   59356 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1004 04:11:57.977511   59356 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:11:57.977537   59356 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:11:58.470331   59356 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1004 04:11:58.474498   59356 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:11:58.474522   59356 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:11:58.970314   59356 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1004 04:11:58.978942   59356 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:11:58.978975   59356 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:11:59.469572   59356 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1004 04:11:59.474157   59356 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1004 04:11:59.481340   59356 api_server.go:141] control plane version: v1.31.1
	I1004 04:11:59.481369   59356 api_server.go:131] duration metric: took 4.012080581s to wait for apiserver health ...
	I1004 04:11:59.481377   59356 cni.go:84] Creating CNI manager for ""
	I1004 04:11:59.481383   59356 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:11:59.483711   59356 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:11:55.055399   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .Start
	I1004 04:11:55.055620   59686 main.go:141] libmachine: (stopped-upgrade-389737) Ensuring networks are active...
	I1004 04:11:55.056531   59686 main.go:141] libmachine: (stopped-upgrade-389737) Ensuring network default is active
	I1004 04:11:55.056978   59686 main.go:141] libmachine: (stopped-upgrade-389737) Ensuring network mk-stopped-upgrade-389737 is active
	I1004 04:11:55.057483   59686 main.go:141] libmachine: (stopped-upgrade-389737) Getting domain xml...
	I1004 04:11:55.058486   59686 main.go:141] libmachine: (stopped-upgrade-389737) Creating domain...
	I1004 04:11:56.303929   59686 main.go:141] libmachine: (stopped-upgrade-389737) Waiting to get IP...
	I1004 04:11:56.304976   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:11:56.305354   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:11:56.305426   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:11:56.305336   59722 retry.go:31] will retry after 238.462034ms: waiting for machine to come up
	I1004 04:11:56.545960   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:11:56.546468   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:11:56.546493   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:11:56.546419   59722 retry.go:31] will retry after 350.897629ms: waiting for machine to come up
	I1004 04:11:56.899102   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:11:56.899616   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:11:56.899645   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:11:56.899566   59722 retry.go:31] will retry after 447.479738ms: waiting for machine to come up
	I1004 04:11:57.348152   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:11:57.348661   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:11:57.348688   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:11:57.348615   59722 retry.go:31] will retry after 369.223931ms: waiting for machine to come up
	I1004 04:11:57.719177   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:11:57.719737   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:11:57.719769   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:11:57.719678   59722 retry.go:31] will retry after 602.656032ms: waiting for machine to come up
	I1004 04:11:58.323435   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:11:58.323898   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:11:58.323954   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:11:58.323881   59722 retry.go:31] will retry after 815.829727ms: waiting for machine to come up
	I1004 04:11:59.140901   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:11:59.141401   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:11:59.141427   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:11:59.141367   59722 retry.go:31] will retry after 796.999391ms: waiting for machine to come up
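The retry loop above is polling libvirt for a DHCP lease on the minikube-created network. Assuming access to the same libvirt instance, the lease table can be inspected directly; the network and domain names are taken from the log:

    # list DHCP leases handed out on the mk-stopped-upgrade-389737 network
    virsh --connect qemu:///system net-dhcp-leases mk-stopped-upgrade-389737
    # or ask libvirt for the addresses of the domain itself
    virsh --connect qemu:///system domifaddr stopped-upgrade-389737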
	I1004 04:11:59.485392   59356 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:11:59.496838   59356 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
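To look at the bridge CNI configuration that was just copied into place, one option (a sketch; the profile name is taken from the log) is to read it back over minikube's SSH helper:

    # print the CNI config minikube wrote for the bridge network
    minikube ssh -p pause-353264 -- sudo cat /etc/cni/net.d/1-k8s.conflist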
	I1004 04:11:59.517923   59356 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:11:59.518024   59356 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1004 04:11:59.518047   59356 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1004 04:11:59.526831   59356 system_pods.go:59] 6 kube-system pods found
	I1004 04:11:59.526861   59356 system_pods.go:61] "coredns-7c65d6cfc9-gttvn" [4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c] Running
	I1004 04:11:59.526870   59356 system_pods.go:61] "etcd-pause-353264" [834bfa30-5dd7-4d25-8331-2b9418027c01] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:11:59.526876   59356 system_pods.go:61] "kube-apiserver-pause-353264" [b0180bdc-2f96-4ea3-ada2-9e0f3251fbe5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:11:59.526884   59356 system_pods.go:61] "kube-controller-manager-pause-353264" [da561ad5-cbdc-4e0a-b1cf-a99ed2605d14] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:11:59.526889   59356 system_pods.go:61] "kube-proxy-tthhg" [5b374015-3f42-42d6-8357-e27efe1a939a] Running
	I1004 04:11:59.526894   59356 system_pods.go:61] "kube-scheduler-pause-353264" [9b44baad-ee16-42bf-aa51-0810167533ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:11:59.526909   59356 system_pods.go:74] duration metric: took 8.951419ms to wait for pod list to return data ...
	I1004 04:11:59.526917   59356 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:11:59.531204   59356 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:11:59.531236   59356 node_conditions.go:123] node cpu capacity is 2
	I1004 04:11:59.531253   59356 node_conditions.go:105] duration metric: took 4.331354ms to run NodePressure ...
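The NodePressure check reads the node's reported capacity; the same figures (2 CPUs, 17734596Ki ephemeral storage) can be confirmed with kubectl, assuming the kubeconfig context created for this profile:

    # show the Capacity section of the node, which the NodePressure check reads
    kubectl --context pause-353264 describe node pause-353264 | grep -A 6 'Capacity:'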
	I1004 04:11:59.531280   59356 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:11:59.802165   59356 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:11:59.808719   59356 kubeadm.go:739] kubelet initialised
	I1004 04:11:59.808754   59356 kubeadm.go:740] duration metric: took 6.543454ms waiting for restarted kubelet to initialise ...
	I1004 04:11:59.808766   59356 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:11:59.815284   59356 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-gttvn" in "kube-system" namespace to be "Ready" ...
	I1004 04:11:59.823313   59356 pod_ready.go:93] pod "coredns-7c65d6cfc9-gttvn" in "kube-system" namespace has status "Ready":"True"
	I1004 04:11:59.823337   59356 pod_ready.go:82] duration metric: took 8.022694ms for pod "coredns-7c65d6cfc9-gttvn" in "kube-system" namespace to be "Ready" ...
	I1004 04:11:59.823347   59356 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:01.834122   59356 pod_ready.go:103] pod "etcd-pause-353264" in "kube-system" namespace has status "Ready":"False"
	I1004 04:12:01.181603   54385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:12:01.181837   54385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
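The kubelet-check lines above come from a different kubeadm run (pid 54385) whose kubelet is not yet answering on its health port. On that node, the probe kubeadm performs plus the usual service and journal checks would look roughly like this (the connection-refused result is expected until the kubelet finishes starting):

    # the health probe kubeadm runs against the kubelet
    curl -sS http://localhost:10248/healthz
    # service state and recent kubelet logs
    systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 50 --no-pager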
	I1004 04:11:59.940831   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:11:59.941467   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:11:59.941543   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:11:59.941477   59722 retry.go:31] will retry after 1.067074037s: waiting for machine to come up
	I1004 04:12:01.010980   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:01.011558   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:12:01.011578   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:12:01.011487   59722 retry.go:31] will retry after 1.371155898s: waiting for machine to come up
	I1004 04:12:02.385128   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:02.385719   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:12:02.385745   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:12:02.385667   59722 retry.go:31] will retry after 2.308141043s: waiting for machine to come up
	I1004 04:12:04.697015   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:04.697476   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:12:04.697504   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:12:04.697414   59722 retry.go:31] will retry after 2.468063752s: waiting for machine to come up
	I1004 04:12:04.330017   59356 pod_ready.go:103] pod "etcd-pause-353264" in "kube-system" namespace has status "Ready":"False"
	I1004 04:12:06.331992   59356 pod_ready.go:103] pod "etcd-pause-353264" in "kube-system" namespace has status "Ready":"False"
	I1004 04:12:07.168887   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:07.169442   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:12:07.169490   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:12:07.169408   59722 retry.go:31] will retry after 2.477656955s: waiting for machine to come up
	I1004 04:12:09.649007   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:09.649553   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:12:09.649579   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:12:09.649500   59722 retry.go:31] will retry after 3.983855136s: waiting for machine to come up
	I1004 04:12:08.831026   59356 pod_ready.go:103] pod "etcd-pause-353264" in "kube-system" namespace has status "Ready":"False"
	I1004 04:12:10.330577   59356 pod_ready.go:93] pod "etcd-pause-353264" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:10.330602   59356 pod_ready.go:82] duration metric: took 10.50724891s for pod "etcd-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:10.330613   59356 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.336975   59356 pod_ready.go:93] pod "kube-apiserver-pause-353264" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:11.337003   59356 pod_ready.go:82] duration metric: took 1.006383121s for pod "kube-apiserver-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.337012   59356 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.342120   59356 pod_ready.go:93] pod "kube-controller-manager-pause-353264" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:11.342140   59356 pod_ready.go:82] duration metric: took 5.121943ms for pod "kube-controller-manager-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.342148   59356 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tthhg" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.348177   59356 pod_ready.go:93] pod "kube-proxy-tthhg" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:11.348203   59356 pod_ready.go:82] duration metric: took 6.04771ms for pod "kube-proxy-tthhg" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.348216   59356 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.353137   59356 pod_ready.go:93] pod "kube-scheduler-pause-353264" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:11.353156   59356 pod_ready.go:82] duration metric: took 4.933845ms for pod "kube-scheduler-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.353164   59356 pod_ready.go:39] duration metric: took 11.544386687s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
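The readiness polling above maps onto ordinary kubectl checks; a sketch of the equivalent manual commands, assuming the pause-353264 context from the log:

    # list the control-plane pods and their Ready state
    kubectl --context pause-353264 -n kube-system get pods -o wide
    # block until etcd reports Ready, mirroring the 4m wait in the log
    kubectl --context pause-353264 -n kube-system wait --for=condition=Ready pod/etcd-pause-353264 --timeout=4m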
	I1004 04:12:11.353180   59356 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:12:11.367452   59356 ops.go:34] apiserver oom_adj: -16
	I1004 04:12:11.367471   59356 kubeadm.go:597] duration metric: took 29.267075053s to restartPrimaryControlPlane
	I1004 04:12:11.367480   59356 kubeadm.go:394] duration metric: took 29.802462907s to StartCluster
	I1004 04:12:11.367495   59356 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:12:11.367564   59356 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:12:11.368491   59356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:12:11.368712   59356 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:12:11.368794   59356 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:12:11.368936   59356 config.go:182] Loaded profile config "pause-353264": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:12:11.370428   59356 out.go:177] * Enabled addons: 
	I1004 04:12:11.370444   59356 out.go:177] * Verifying Kubernetes components...
	I1004 04:12:11.371969   59356 addons.go:510] duration metric: took 3.183426ms for enable addons: enabled=[]
	I1004 04:12:11.371983   59356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:12:11.519302   59356 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:12:11.537493   59356 node_ready.go:35] waiting up to 6m0s for node "pause-353264" to be "Ready" ...
	I1004 04:12:11.540518   59356 node_ready.go:49] node "pause-353264" has status "Ready":"True"
	I1004 04:12:11.540558   59356 node_ready.go:38] duration metric: took 3.006339ms for node "pause-353264" to be "Ready" ...
	I1004 04:12:11.540569   59356 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:12:11.545071   59356 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gttvn" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.928381   59356 pod_ready.go:93] pod "coredns-7c65d6cfc9-gttvn" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:11.928412   59356 pod_ready.go:82] duration metric: took 383.31432ms for pod "coredns-7c65d6cfc9-gttvn" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.928426   59356 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:12.328082   59356 pod_ready.go:93] pod "etcd-pause-353264" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:12.328110   59356 pod_ready.go:82] duration metric: took 399.674453ms for pod "etcd-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:12.328123   59356 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:12.733710   59356 pod_ready.go:93] pod "kube-apiserver-pause-353264" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:12.733742   59356 pod_ready.go:82] duration metric: took 405.61084ms for pod "kube-apiserver-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:12.733756   59356 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:13.128025   59356 pod_ready.go:93] pod "kube-controller-manager-pause-353264" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:13.128054   59356 pod_ready.go:82] duration metric: took 394.288848ms for pod "kube-controller-manager-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:13.128068   59356 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tthhg" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:13.528315   59356 pod_ready.go:93] pod "kube-proxy-tthhg" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:13.528339   59356 pod_ready.go:82] duration metric: took 400.264199ms for pod "kube-proxy-tthhg" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:13.528349   59356 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:13.930187   59356 pod_ready.go:93] pod "kube-scheduler-pause-353264" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:13.930215   59356 pod_ready.go:82] duration metric: took 401.85941ms for pod "kube-scheduler-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:13.930224   59356 pod_ready.go:39] duration metric: took 2.389643751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:12:13.930238   59356 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:12:13.930288   59356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:12:13.946524   59356 api_server.go:72] duration metric: took 2.577784416s to wait for apiserver process to appear ...
	I1004 04:12:13.946559   59356 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:12:13.946584   59356 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1004 04:12:13.951589   59356 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1004 04:12:13.952990   59356 api_server.go:141] control plane version: v1.31.1
	I1004 04:12:13.953012   59356 api_server.go:131] duration metric: took 6.445157ms to wait for apiserver health ...
	I1004 04:12:13.953019   59356 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:12:14.130044   59356 system_pods.go:59] 6 kube-system pods found
	I1004 04:12:14.130074   59356 system_pods.go:61] "coredns-7c65d6cfc9-gttvn" [4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c] Running
	I1004 04:12:14.130079   59356 system_pods.go:61] "etcd-pause-353264" [834bfa30-5dd7-4d25-8331-2b9418027c01] Running
	I1004 04:12:14.130083   59356 system_pods.go:61] "kube-apiserver-pause-353264" [b0180bdc-2f96-4ea3-ada2-9e0f3251fbe5] Running
	I1004 04:12:14.130086   59356 system_pods.go:61] "kube-controller-manager-pause-353264" [da561ad5-cbdc-4e0a-b1cf-a99ed2605d14] Running
	I1004 04:12:14.130090   59356 system_pods.go:61] "kube-proxy-tthhg" [5b374015-3f42-42d6-8357-e27efe1a939a] Running
	I1004 04:12:14.130093   59356 system_pods.go:61] "kube-scheduler-pause-353264" [9b44baad-ee16-42bf-aa51-0810167533ff] Running
	I1004 04:12:14.130100   59356 system_pods.go:74] duration metric: took 177.074869ms to wait for pod list to return data ...
	I1004 04:12:14.130106   59356 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:12:14.327469   59356 default_sa.go:45] found service account: "default"
	I1004 04:12:14.327491   59356 default_sa.go:55] duration metric: took 197.380356ms for default service account to be created ...
	I1004 04:12:14.327508   59356 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:12:14.530370   59356 system_pods.go:86] 6 kube-system pods found
	I1004 04:12:14.530398   59356 system_pods.go:89] "coredns-7c65d6cfc9-gttvn" [4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c] Running
	I1004 04:12:14.530404   59356 system_pods.go:89] "etcd-pause-353264" [834bfa30-5dd7-4d25-8331-2b9418027c01] Running
	I1004 04:12:14.530408   59356 system_pods.go:89] "kube-apiserver-pause-353264" [b0180bdc-2f96-4ea3-ada2-9e0f3251fbe5] Running
	I1004 04:12:14.530413   59356 system_pods.go:89] "kube-controller-manager-pause-353264" [da561ad5-cbdc-4e0a-b1cf-a99ed2605d14] Running
	I1004 04:12:14.530417   59356 system_pods.go:89] "kube-proxy-tthhg" [5b374015-3f42-42d6-8357-e27efe1a939a] Running
	I1004 04:12:14.530421   59356 system_pods.go:89] "kube-scheduler-pause-353264" [9b44baad-ee16-42bf-aa51-0810167533ff] Running
	I1004 04:12:14.530427   59356 system_pods.go:126] duration metric: took 202.913767ms to wait for k8s-apps to be running ...
	I1004 04:12:14.530435   59356 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:12:14.530484   59356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:12:14.552933   59356 system_svc.go:56] duration metric: took 22.489797ms WaitForService to wait for kubelet
	I1004 04:12:14.552965   59356 kubeadm.go:582] duration metric: took 3.184228997s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:12:14.552987   59356 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:12:14.727510   59356 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:12:14.727541   59356 node_conditions.go:123] node cpu capacity is 2
	I1004 04:12:14.727552   59356 node_conditions.go:105] duration metric: took 174.56045ms to run NodePressure ...
	I1004 04:12:14.727563   59356 start.go:241] waiting for startup goroutines ...
	I1004 04:12:14.727569   59356 start.go:246] waiting for cluster config update ...
	I1004 04:12:14.727577   59356 start.go:255] writing updated cluster config ...
	I1004 04:12:14.727927   59356 ssh_runner.go:195] Run: rm -f paused
	I1004 04:12:14.785686   59356 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:12:14.787972   59356 out.go:177] * Done! kubectl is now configured to use "pause-353264" cluster and "default" namespace by default
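At this point the profile's kubeconfig context is in place; a quick smoke test of the restarted cluster, assuming the kubeconfig path updated earlier in the log, might be:

    # confirm the active context and that the node and pods are healthy
    kubectl config current-context
    kubectl --context pause-353264 get nodes
    kubectl --context pause-353264 get pods -A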
	I1004 04:12:13.636943   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.637408   59686 main.go:141] libmachine: (stopped-upgrade-389737) Found IP for machine: 192.168.61.179
	I1004 04:12:13.637429   59686 main.go:141] libmachine: (stopped-upgrade-389737) Reserving static IP address...
	I1004 04:12:13.637456   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has current primary IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.637969   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "stopped-upgrade-389737", mac: "52:54:00:01:43:69", ip: "192.168.61.179"} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:13.637986   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | skip adding static IP to network mk-stopped-upgrade-389737 - found existing host DHCP lease matching {name: "stopped-upgrade-389737", mac: "52:54:00:01:43:69", ip: "192.168.61.179"}
	I1004 04:12:13.638023   59686 main.go:141] libmachine: (stopped-upgrade-389737) Reserved static IP address: 192.168.61.179
	I1004 04:12:13.638054   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | Getting to WaitForSSH function...
	I1004 04:12:13.638068   59686 main.go:141] libmachine: (stopped-upgrade-389737) Waiting for SSH to be available...
	I1004 04:12:13.640241   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.640628   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:13.640650   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.640791   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | Using SSH client type: external
	I1004 04:12:13.640832   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/stopped-upgrade-389737/id_rsa (-rw-------)
	I1004 04:12:13.640872   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/stopped-upgrade-389737/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:12:13.640905   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | About to run SSH command:
	I1004 04:12:13.640915   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | exit 0
	I1004 04:12:13.728202   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | SSH cmd err, output: <nil>: 
	I1004 04:12:13.728616   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetConfigRaw
	I1004 04:12:13.729271   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetIP
	I1004 04:12:13.732411   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.732917   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:13.732941   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.733247   59686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/stopped-upgrade-389737/config.json ...
	I1004 04:12:13.733533   59686 machine.go:93] provisionDockerMachine start ...
	I1004 04:12:13.733559   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .DriverName
	I1004 04:12:13.733753   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:13.736689   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.737070   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:13.737114   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.737225   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHPort
	I1004 04:12:13.737550   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:13.737739   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:13.737897   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHUsername
	I1004 04:12:13.738087   59686 main.go:141] libmachine: Using SSH client type: native
	I1004 04:12:13.738293   59686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I1004 04:12:13.738312   59686 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:12:13.847940   59686 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:12:13.847968   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetMachineName
	I1004 04:12:13.848229   59686 buildroot.go:166] provisioning hostname "stopped-upgrade-389737"
	I1004 04:12:13.848268   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetMachineName
	I1004 04:12:13.848431   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:13.851138   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.851691   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:13.851733   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.851855   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHPort
	I1004 04:12:13.852036   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:13.852398   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:13.852573   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHUsername
	I1004 04:12:13.852751   59686 main.go:141] libmachine: Using SSH client type: native
	I1004 04:12:13.852919   59686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I1004 04:12:13.852931   59686 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-389737 && echo "stopped-upgrade-389737" | sudo tee /etc/hostname
	I1004 04:12:13.972119   59686 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-389737
	
	I1004 04:12:13.972153   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:13.974971   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.975454   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:13.975479   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.975686   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHPort
	I1004 04:12:13.975895   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:13.976052   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:13.976197   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHUsername
	I1004 04:12:13.976431   59686 main.go:141] libmachine: Using SSH client type: native
	I1004 04:12:13.976599   59686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I1004 04:12:13.976615   59686 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-389737' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-389737/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-389737' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:12:14.094892   59686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:12:14.094926   59686 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:12:14.094951   59686 buildroot.go:174] setting up certificates
	I1004 04:12:14.094962   59686 provision.go:84] configureAuth start
	I1004 04:12:14.094994   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetMachineName
	I1004 04:12:14.095288   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetIP
	I1004 04:12:14.098085   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.098480   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:14.098519   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.098696   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:14.100959   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.101251   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:14.101273   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.101398   59686 provision.go:143] copyHostCerts
	I1004 04:12:14.101464   59686 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:12:14.101476   59686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:12:14.101552   59686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:12:14.101668   59686 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:12:14.101679   59686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:12:14.101719   59686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:12:14.101803   59686 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:12:14.101812   59686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:12:14.101850   59686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:12:14.101937   59686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-389737 san=[127.0.0.1 192.168.61.179 localhost minikube stopped-upgrade-389737]
	I1004 04:12:14.459060   59686 provision.go:177] copyRemoteCerts
	I1004 04:12:14.459130   59686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:12:14.459154   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:14.462494   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.462899   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:14.462937   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.463105   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHPort
	I1004 04:12:14.463372   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:14.463530   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHUsername
	I1004 04:12:14.463745   59686 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/stopped-upgrade-389737/id_rsa Username:docker}
	I1004 04:12:14.547345   59686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:12:14.569657   59686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1004 04:12:14.593247   59686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 04:12:14.614424   59686 provision.go:87] duration metric: took 519.451326ms to configureAuth
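The server certificate generated above embeds the SAN list shown in the san=[...] field; one way to verify it (the path is taken from the log, and openssl is assumed to be available on the host) is:

    # print the Subject Alternative Name extension of the machine server cert
    openssl x509 -in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem -noout -text | grep -A 1 'Subject Alternative Name'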
	I1004 04:12:14.614449   59686 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:12:14.614612   59686 config.go:182] Loaded profile config "stopped-upgrade-389737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1004 04:12:14.614680   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:14.617416   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.617925   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:14.617961   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.618196   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHPort
	I1004 04:12:14.618454   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:14.618711   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:14.618950   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHUsername
	I1004 04:12:14.619179   59686 main.go:141] libmachine: Using SSH client type: native
	I1004 04:12:14.619366   59686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I1004 04:12:14.619382   59686 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:12:14.908775   59686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:12:14.908799   59686 machine.go:96] duration metric: took 1.175248738s to provisionDockerMachine
	I1004 04:12:14.908812   59686 start.go:293] postStartSetup for "stopped-upgrade-389737" (driver="kvm2")
	I1004 04:12:14.908826   59686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:12:14.908851   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .DriverName
	I1004 04:12:14.909196   59686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:12:14.909223   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:14.911579   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.912014   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:14.912042   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.912141   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHPort
	I1004 04:12:14.912313   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:14.912474   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHUsername
	I1004 04:12:14.912613   59686 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/stopped-upgrade-389737/id_rsa Username:docker}
	
	
	==> CRI-O <==
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.451888908Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015135451862910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b30de7f5-a131-42d1-9aca-47c97c4a5fab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.452610172Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b6fbfdc-4b62-4a54-977b-781dc9ae6db1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.452681842Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b6fbfdc-4b62-4a54-977b-781dc9ae6db1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.453102233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:740199b9cf8f7d44945f1595c6bc95b9d4a5f506de620b60b669d7350ee8c288,PodSandboxId:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015114992781418,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fcbb4a0c9f819570f35be774064d1789878d712f59584bf47095c8a2b8b9c2,PodSandboxId:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015114987863816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c9e66da51164aa6a44f8e0dffdbb1958d7eeecbe69c9855b5c62e85b5ca7731,PodSandboxId:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015114976853648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ca0d560be7008050b1fbb40a47846e30d12910c212d181e8a1107fe9038e8da,PodSandboxId:d56131f182017e3dab1fe7c6e6b952904badf8b486d3b2374fa2641261d8217a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015101873557801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c7e7e82971fb76354fb48da635c71754db693ee4946a616c23668e9da54086,PodSandboxId:7ba8d99af7074e6ee7a539c38638d2b878cfc1e624e622e71a2cb26222aa91d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015102500344006,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942a33b53c14884a730c0d9372ec5780215af6816181a9f78c41f6daceb8793c,PodSandboxId:5df0266451547af38f201fe742368c81b995786205cfac73d19907087b499931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015101752070190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed681309c428fa41f72fe46aa98190a1,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d25766843c897790bce723482daabdb70fdb342b72df7f64ee2248db1c95117,PodSandboxId:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728015101643719572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880e75697720a668948a7e23ff0534f175e214479c40dc90fa96079410f83512,PodSandboxId:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728015101521780303,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aff26b1ec7bb1ec30ebf45e6a1956367f821edea625ad7dcc9b2471118c1503,PodSandboxId:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728015101595742138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36279bf5603bc0e718742b055d454240e5aac0045b783d2c0bda216288633d35,PodSandboxId:f76c8e108a6af8e4d582adf7aabe6b07bc2c05ac2af846f7efa2b9f0b302a090,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728015065696616512,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7934b01963c4be725cd65138a3481e7a73e7902fe8f0e73707284f44de4f47ef,PodSandboxId:7440bcfcd066e374a6b056922273048c27bc180c0671eebde598a674df63252b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728015065304955485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca00a9c6d44e15b6fd112c388a715d1ba7ec29eb6d9bf7442109db7344a9a29,PodSandboxId:857ce4225431f5a464d525007e5b848455dc4abd9def63613b93a869e212ca91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728015054457422007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ed681309c428fa41f72fe46aa98190a1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b6fbfdc-4b62-4a54-977b-781dc9ae6db1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.501733912Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9248673e-5571-4233-8af4-1969e7633f28 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.501853524Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9248673e-5571-4233-8af4-1969e7633f28 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.503338669Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8da3fa1-222a-43df-b0b6-613ce0d9d1f6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.504112333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015135504078715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8da3fa1-222a-43df-b0b6-613ce0d9d1f6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.505153784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8dc47609-006a-41ac-9649-60c11f37186b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.505229398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8dc47609-006a-41ac-9649-60c11f37186b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.505673784Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:740199b9cf8f7d44945f1595c6bc95b9d4a5f506de620b60b669d7350ee8c288,PodSandboxId:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015114992781418,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fcbb4a0c9f819570f35be774064d1789878d712f59584bf47095c8a2b8b9c2,PodSandboxId:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015114987863816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c9e66da51164aa6a44f8e0dffdbb1958d7eeecbe69c9855b5c62e85b5ca7731,PodSandboxId:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015114976853648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ca0d560be7008050b1fbb40a47846e30d12910c212d181e8a1107fe9038e8da,PodSandboxId:d56131f182017e3dab1fe7c6e6b952904badf8b486d3b2374fa2641261d8217a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015101873557801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c7e7e82971fb76354fb48da635c71754db693ee4946a616c23668e9da54086,PodSandboxId:7ba8d99af7074e6ee7a539c38638d2b878cfc1e624e622e71a2cb26222aa91d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015102500344006,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942a33b53c14884a730c0d9372ec5780215af6816181a9f78c41f6daceb8793c,PodSandboxId:5df0266451547af38f201fe742368c81b995786205cfac73d19907087b499931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015101752070190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed681309c428fa41f72fe46aa98190a1,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d25766843c897790bce723482daabdb70fdb342b72df7f64ee2248db1c95117,PodSandboxId:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728015101643719572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880e75697720a668948a7e23ff0534f175e214479c40dc90fa96079410f83512,PodSandboxId:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728015101521780303,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aff26b1ec7bb1ec30ebf45e6a1956367f821edea625ad7dcc9b2471118c1503,PodSandboxId:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728015101595742138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36279bf5603bc0e718742b055d454240e5aac0045b783d2c0bda216288633d35,PodSandboxId:f76c8e108a6af8e4d582adf7aabe6b07bc2c05ac2af846f7efa2b9f0b302a090,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728015065696616512,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7934b01963c4be725cd65138a3481e7a73e7902fe8f0e73707284f44de4f47ef,PodSandboxId:7440bcfcd066e374a6b056922273048c27bc180c0671eebde598a674df63252b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728015065304955485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca00a9c6d44e15b6fd112c388a715d1ba7ec29eb6d9bf7442109db7344a9a29,PodSandboxId:857ce4225431f5a464d525007e5b848455dc4abd9def63613b93a869e212ca91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728015054457422007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ed681309c428fa41f72fe46aa98190a1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8dc47609-006a-41ac-9649-60c11f37186b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.559415227Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d1cf0311-0c4a-4ca1-ba58-3c041712f702 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.559575270Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1cf0311-0c4a-4ca1-ba58-3c041712f702 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.560794571Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0c55fc2-5c18-465f-b5d6-3dbf4483aaff name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.561215378Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015135561190194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0c55fc2-5c18-465f-b5d6-3dbf4483aaff name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.561936963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1c1c4d3-ed52-45d0-ab6f-0cfc818c9092 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.562012433Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1c1c4d3-ed52-45d0-ab6f-0cfc818c9092 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.562270562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:740199b9cf8f7d44945f1595c6bc95b9d4a5f506de620b60b669d7350ee8c288,PodSandboxId:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015114992781418,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fcbb4a0c9f819570f35be774064d1789878d712f59584bf47095c8a2b8b9c2,PodSandboxId:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015114987863816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c9e66da51164aa6a44f8e0dffdbb1958d7eeecbe69c9855b5c62e85b5ca7731,PodSandboxId:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015114976853648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ca0d560be7008050b1fbb40a47846e30d12910c212d181e8a1107fe9038e8da,PodSandboxId:d56131f182017e3dab1fe7c6e6b952904badf8b486d3b2374fa2641261d8217a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015101873557801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c7e7e82971fb76354fb48da635c71754db693ee4946a616c23668e9da54086,PodSandboxId:7ba8d99af7074e6ee7a539c38638d2b878cfc1e624e622e71a2cb26222aa91d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015102500344006,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942a33b53c14884a730c0d9372ec5780215af6816181a9f78c41f6daceb8793c,PodSandboxId:5df0266451547af38f201fe742368c81b995786205cfac73d19907087b499931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015101752070190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed681309c428fa41f72fe46aa98190a1,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d25766843c897790bce723482daabdb70fdb342b72df7f64ee2248db1c95117,PodSandboxId:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728015101643719572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880e75697720a668948a7e23ff0534f175e214479c40dc90fa96079410f83512,PodSandboxId:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728015101521780303,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aff26b1ec7bb1ec30ebf45e6a1956367f821edea625ad7dcc9b2471118c1503,PodSandboxId:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728015101595742138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36279bf5603bc0e718742b055d454240e5aac0045b783d2c0bda216288633d35,PodSandboxId:f76c8e108a6af8e4d582adf7aabe6b07bc2c05ac2af846f7efa2b9f0b302a090,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728015065696616512,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7934b01963c4be725cd65138a3481e7a73e7902fe8f0e73707284f44de4f47ef,PodSandboxId:7440bcfcd066e374a6b056922273048c27bc180c0671eebde598a674df63252b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728015065304955485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca00a9c6d44e15b6fd112c388a715d1ba7ec29eb6d9bf7442109db7344a9a29,PodSandboxId:857ce4225431f5a464d525007e5b848455dc4abd9def63613b93a869e212ca91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728015054457422007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ed681309c428fa41f72fe46aa98190a1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1c1c4d3-ed52-45d0-ab6f-0cfc818c9092 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.609388308Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92b4acff-48d5-43f8-9c91-78f4829a1569 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.609608098Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92b4acff-48d5-43f8-9c91-78f4829a1569 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.611592916Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dbb46501-f198-4368-8087-e562800e3bad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.612344889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015135612310520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dbb46501-f198-4368-8087-e562800e3bad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.613403579Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7bff647-a3d6-49fe-916f-b7563b508836 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.613554277Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7bff647-a3d6-49fe-916f-b7563b508836 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:15 pause-353264 crio[2077]: time="2024-10-04 04:12:15.613943906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:740199b9cf8f7d44945f1595c6bc95b9d4a5f506de620b60b669d7350ee8c288,PodSandboxId:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015114992781418,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fcbb4a0c9f819570f35be774064d1789878d712f59584bf47095c8a2b8b9c2,PodSandboxId:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015114987863816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c9e66da51164aa6a44f8e0dffdbb1958d7eeecbe69c9855b5c62e85b5ca7731,PodSandboxId:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015114976853648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ca0d560be7008050b1fbb40a47846e30d12910c212d181e8a1107fe9038e8da,PodSandboxId:d56131f182017e3dab1fe7c6e6b952904badf8b486d3b2374fa2641261d8217a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015101873557801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c7e7e82971fb76354fb48da635c71754db693ee4946a616c23668e9da54086,PodSandboxId:7ba8d99af7074e6ee7a539c38638d2b878cfc1e624e622e71a2cb26222aa91d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015102500344006,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942a33b53c14884a730c0d9372ec5780215af6816181a9f78c41f6daceb8793c,PodSandboxId:5df0266451547af38f201fe742368c81b995786205cfac73d19907087b499931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015101752070190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed681309c428fa41f72fe46aa98190a1,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d25766843c897790bce723482daabdb70fdb342b72df7f64ee2248db1c95117,PodSandboxId:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728015101643719572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880e75697720a668948a7e23ff0534f175e214479c40dc90fa96079410f83512,PodSandboxId:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728015101521780303,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aff26b1ec7bb1ec30ebf45e6a1956367f821edea625ad7dcc9b2471118c1503,PodSandboxId:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728015101595742138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36279bf5603bc0e718742b055d454240e5aac0045b783d2c0bda216288633d35,PodSandboxId:f76c8e108a6af8e4d582adf7aabe6b07bc2c05ac2af846f7efa2b9f0b302a090,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728015065696616512,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7934b01963c4be725cd65138a3481e7a73e7902fe8f0e73707284f44de4f47ef,PodSandboxId:7440bcfcd066e374a6b056922273048c27bc180c0671eebde598a674df63252b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728015065304955485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca00a9c6d44e15b6fd112c388a715d1ba7ec29eb6d9bf7442109db7344a9a29,PodSandboxId:857ce4225431f5a464d525007e5b848455dc4abd9def63613b93a869e212ca91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728015054457422007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ed681309c428fa41f72fe46aa98190a1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7bff647-a3d6-49fe-916f-b7563b508836 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	740199b9cf8f7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   20 seconds ago       Running             kube-controller-manager   2                   9f247ffbb3b9f       kube-controller-manager-pause-353264
	46fcbb4a0c9f8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   20 seconds ago       Running             etcd                      2                   e0d062061752e       etcd-pause-353264
	9c9e66da51164       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   20 seconds ago       Running             kube-apiserver            2                   e7b09bd196a35       kube-apiserver-pause-353264
	61c7e7e82971f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   33 seconds ago       Running             coredns                   1                   7ba8d99af7074       coredns-7c65d6cfc9-gttvn
	0ca0d560be700       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   33 seconds ago       Running             kube-proxy                1                   d56131f182017       kube-proxy-tthhg
	942a33b53c148       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   33 seconds ago       Running             kube-scheduler            1                   5df0266451547       kube-scheduler-pause-353264
	7d25766843c89       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   34 seconds ago       Exited              etcd                      1                   e0d062061752e       etcd-pause-353264
	9aff26b1ec7bb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   34 seconds ago       Exited              kube-controller-manager   1                   9f247ffbb3b9f       kube-controller-manager-pause-353264
	880e75697720a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   34 seconds ago       Exited              kube-apiserver            1                   e7b09bd196a35       kube-apiserver-pause-353264
	36279bf5603bc       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   f76c8e108a6af       coredns-7c65d6cfc9-gttvn
	7934b01963c4b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Exited              kube-proxy                0                   7440bcfcd066e       kube-proxy-tthhg
	dca00a9c6d44e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Exited              kube-scheduler            0                   857ce4225431f       kube-scheduler-pause-353264
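	For reference, a listing equivalent to the container-status table above can usually be reproduced directly on the node, assuming crictl inside the minikube VM is configured against the CRI-O socket (the default for this driver) and using the profile name pause-353264 taken from the logs above:
	
	  $ out/minikube-linux-amd64 ssh -p pause-353264 -- sudo crictl ps -a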
	
	
	==> coredns [36279bf5603bc0e718742b055d454240e5aac0045b783d2c0bda216288633d35] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59030 - 13730 "HINFO IN 5594241315810035337.6325755141832387909. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025692232s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[1899728852]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 04:11:05.949) (total time: 21621ms):
	Trace[1899728852]: [21.621358383s] [21.621358383s] END
	[INFO] plugin/kubernetes: Trace[1772467420]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 04:11:05.950) (total time: 21620ms):
	Trace[1772467420]: [21.620474782s] [21.620474782s] END
	[INFO] plugin/kubernetes: Trace[1812294083]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 04:11:05.948) (total time: 21622ms):
	Trace[1812294083]: [21.622746131s] [21.622746131s] END
	
	
	==> coredns [61c7e7e82971fb76354fb48da635c71754db693ee4946a616c23668e9da54086] <==
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41253 - 4363 "HINFO IN 4737419583035021761.7604275016666083342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014492725s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[829427277]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 04:11:42.717) (total time: 10000ms):
	Trace[829427277]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (04:11:52.717)
	Trace[829427277]: [10.000924048s] [10.000924048s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[428170624]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 04:11:42.716) (total time: 10001ms):
	Trace[428170624]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (04:11:52.717)
	Trace[428170624]: [10.001292697s] [10.001292697s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1549162769]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 04:11:42.716) (total time: 10002ms):
	Trace[1549162769]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (04:11:52.718)
	Trace[1549162769]: [10.00262831s] [10.00262831s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
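	The TLS handshake timeouts and connection-refused errors against 10.96.0.1:443 above are consistent with the kube-apiserver container being restarted in this window (attempt 1 exited, attempt 2 running in the container list above); once the new instance is serving, CoreDNS's reflectors resync on their own. A quick check that the control plane came back, assuming the kubeconfig context matches the profile name, might be:
	
	  $ kubectl --context pause-353264 get --raw /readyz
	  $ kubectl --context pause-353264 -n kube-system get pods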
	
	
	==> describe nodes <==
	Name:               pause-353264
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-353264
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=pause-353264
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T04_11_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 04:10:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-353264
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 04:12:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 04:11:58 +0000   Fri, 04 Oct 2024 04:10:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 04:11:58 +0000   Fri, 04 Oct 2024 04:10:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 04:11:58 +0000   Fri, 04 Oct 2024 04:10:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 04:11:58 +0000   Fri, 04 Oct 2024 04:11:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    pause-353264
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 963a32b70e7a47cbb46f88834fbc654b
	  System UUID:                963a32b7-0e7a-47cb-b46f-88834fbc654b
	  Boot ID:                    164b43a4-1648-4106-8485-8951b89d8fac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-gttvn                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     70s
	  kube-system                 etcd-pause-353264                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         75s
	  kube-system                 kube-apiserver-pause-353264             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-pause-353264    200m (10%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-tthhg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-scheduler-pause-353264             100m (5%)     0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 70s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     75s                kubelet          Node pause-353264 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  75s                kubelet          Node pause-353264 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    75s                kubelet          Node pause-353264 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                75s                kubelet          Node pause-353264 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           72s                node-controller  Node pause-353264 event: Registered Node pause-353264 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node pause-353264 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node pause-353264 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node pause-353264 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node pause-353264 event: Registered Node pause-353264 in Controller
	
	
	==> dmesg <==
	[ +10.384258] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.084087] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067692] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.223241] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.127671] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.316092] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +4.419986] systemd-fstab-generator[749]: Ignoring "noauto" option for root device
	[  +0.064653] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.590181] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +1.417171] kauditd_printk_skb: 87 callbacks suppressed
	[  +5.151674] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[Oct 4 04:11] systemd-fstab-generator[1341]: Ignoring "noauto" option for root device
	[  +1.103022] kauditd_printk_skb: 43 callbacks suppressed
	[ +28.465485] systemd-fstab-generator[2002]: Ignoring "noauto" option for root device
	[  +0.072416] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.074168] systemd-fstab-generator[2014]: Ignoring "noauto" option for root device
	[  +0.198460] systemd-fstab-generator[2028]: Ignoring "noauto" option for root device
	[  +0.143302] systemd-fstab-generator[2040]: Ignoring "noauto" option for root device
	[  +0.337004] systemd-fstab-generator[2069]: Ignoring "noauto" option for root device
	[  +6.133697] systemd-fstab-generator[2186]: Ignoring "noauto" option for root device
	[  +0.072910] kauditd_printk_skb: 100 callbacks suppressed
	[ +13.447622] systemd-fstab-generator[2929]: Ignoring "noauto" option for root device
	[  +0.086361] kauditd_printk_skb: 87 callbacks suppressed
	[Oct 4 04:12] kauditd_printk_skb: 33 callbacks suppressed
	[ +10.040317] systemd-fstab-generator[3246]: Ignoring "noauto" option for root device
	
	
	==> etcd [46fcbb4a0c9f819570f35be774064d1789878d712f59584bf47095c8a2b8b9c2] <==
	{"level":"info","ts":"2024-10-04T04:11:55.328115Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b5cacf25c2f2940e","local-member-id":"903e0dada8362847","added-peer-id":"903e0dada8362847","added-peer-peer-urls":["https://192.168.39.41:2380"]}
	{"level":"info","ts":"2024-10-04T04:11:55.328233Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b5cacf25c2f2940e","local-member-id":"903e0dada8362847","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T04:11:55.328276Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T04:11:55.325415Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:11:55.341333Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-04T04:11:55.341648Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"903e0dada8362847","initial-advertise-peer-urls":["https://192.168.39.41:2380"],"listen-peer-urls":["https://192.168.39.41:2380"],"advertise-client-urls":["https://192.168.39.41:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.41:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-04T04:11:55.341703Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-04T04:11:55.341820Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2024-10-04T04:11:55.341847Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2024-10-04T04:11:56.391283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-04T04:11:56.391344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-04T04:11:56.391375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 received MsgPreVoteResp from 903e0dada8362847 at term 2"}
	{"level":"info","ts":"2024-10-04T04:11:56.391388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became candidate at term 3"}
	{"level":"info","ts":"2024-10-04T04:11:56.391394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 received MsgVoteResp from 903e0dada8362847 at term 3"}
	{"level":"info","ts":"2024-10-04T04:11:56.391402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became leader at term 3"}
	{"level":"info","ts":"2024-10-04T04:11:56.391409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 903e0dada8362847 elected leader 903e0dada8362847 at term 3"}
	{"level":"info","ts":"2024-10-04T04:11:56.396406Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"903e0dada8362847","local-member-attributes":"{Name:pause-353264 ClientURLs:[https://192.168.39.41:2379]}","request-path":"/0/members/903e0dada8362847/attributes","cluster-id":"b5cacf25c2f2940e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T04:11:56.396423Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T04:11:56.396442Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T04:11:56.397120Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T04:11:56.397179Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T04:11:56.397820Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:11:56.397857Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:11:56.398717Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T04:11:56.398838Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.41:2379"}
	
	
	==> etcd [7d25766843c897790bce723482daabdb70fdb342b72df7f64ee2248db1c95117] <==
	{"level":"warn","ts":"2024-10-04T04:11:42.150185Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-10-04T04:11:42.150416Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.41:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.39.41:2380","--initial-cluster=pause-353264=https://192.168.39.41:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.41:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.41:2380","--name=pause-353264","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-c
a-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-10-04T04:11:42.154615Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-10-04T04:11:42.155501Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-10-04T04:11:42.155547Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.41:2380"]}
	{"level":"info","ts":"2024-10-04T04:11:42.155610Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-04T04:11:42.157384Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.41:2379"]}
	{"level":"info","ts":"2024-10-04T04:11:42.157613Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-353264","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.41:2380"],"listen-peer-urls":["https://192.168.39.41:2380"],"advertise-client-urls":["https://192.168.39.41:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.41:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluste
r-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-10-04T04:11:42.268545Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"110.704744ms"}
	{"level":"info","ts":"2024-10-04T04:11:42.328969Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-10-04T04:11:42.389188Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"b5cacf25c2f2940e","local-member-id":"903e0dada8362847","commit-index":392}
	{"level":"info","ts":"2024-10-04T04:11:42.389287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 switched to configuration voters=()"}
	{"level":"info","ts":"2024-10-04T04:11:42.389363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became follower at term 2"}
	{"level":"info","ts":"2024-10-04T04:11:42.389380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 903e0dada8362847 [peers: [], term: 2, commit: 392, applied: 0, lastindex: 392, lastterm: 2]"}
	{"level":"warn","ts":"2024-10-04T04:11:42.401246Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	
	
	==> kernel <==
	 04:12:16 up 1 min,  0 users,  load average: 1.41, 0.46, 0.16
	Linux pause-353264 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [880e75697720a668948a7e23ff0534f175e214479c40dc90fa96079410f83512] <==
	I1004 04:11:42.102754       1 options.go:228] external host was not specified, using 192.168.39.41
	I1004 04:11:42.140265       1 server.go:142] Version: v1.31.1
	I1004 04:11:42.140306       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1004 04:11:42.992153       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:42.992351       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1004 04:11:42.992418       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1004 04:11:42.999283       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1004 04:11:42.999373       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1004 04:11:42.999544       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1004 04:11:42.999763       1 instance.go:232] Using reconciler: lease
	W1004 04:11:43.000760       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:43.993582       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:43.993674       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:44.001401       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:45.389316       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:45.485705       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:45.889356       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:47.833949       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:48.017378       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:48.715033       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:52.122000       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:52.419948       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9c9e66da51164aa6a44f8e0dffdbb1958d7eeecbe69c9855b5c62e85b5ca7731] <==
	I1004 04:11:57.950758       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1004 04:11:57.951096       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1004 04:11:57.951173       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 04:11:57.951180       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 04:11:57.951277       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1004 04:11:57.951366       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1004 04:11:57.973635       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1004 04:11:57.973739       1 aggregator.go:171] initial CRD sync complete...
	I1004 04:11:57.973764       1 autoregister_controller.go:144] Starting autoregister controller
	I1004 04:11:57.973819       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 04:11:57.973849       1 cache.go:39] Caches are synced for autoregister controller
	I1004 04:11:57.993782       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1004 04:11:57.994133       1 shared_informer.go:320] Caches are synced for configmaps
	E1004 04:11:58.007234       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1004 04:11:58.015549       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1004 04:11:58.015656       1 policy_source.go:224] refreshing policies
	I1004 04:11:58.032873       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 04:11:58.798595       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 04:11:59.639383       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 04:11:59.657817       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 04:11:59.699885       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 04:11:59.740313       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 04:11:59.749593       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 04:12:01.355580       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 04:12:01.661834       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [740199b9cf8f7d44945f1595c6bc95b9d4a5f506de620b60b669d7350ee8c288] <==
	I1004 04:12:01.257837       1 shared_informer.go:320] Caches are synced for namespace
	I1004 04:12:01.262549       1 shared_informer.go:320] Caches are synced for service account
	I1004 04:12:01.266543       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1004 04:12:01.301964       1 shared_informer.go:320] Caches are synced for disruption
	I1004 04:12:01.302076       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1004 04:12:01.302256       1 shared_informer.go:320] Caches are synced for PVC protection
	I1004 04:12:01.302265       1 shared_informer.go:320] Caches are synced for GC
	I1004 04:12:01.302274       1 shared_informer.go:320] Caches are synced for deployment
	I1004 04:12:01.303331       1 shared_informer.go:320] Caches are synced for cronjob
	I1004 04:12:01.334129       1 shared_informer.go:320] Caches are synced for endpoint
	I1004 04:12:01.400078       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1004 04:12:01.409916       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1004 04:12:01.429060       1 shared_informer.go:320] Caches are synced for daemon sets
	I1004 04:12:01.458817       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 04:12:01.495698       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 04:12:01.501761       1 shared_informer.go:320] Caches are synced for stateful set
	I1004 04:12:01.565861       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="299.20903ms"
	I1004 04:12:01.565964       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="51.56µs"
	I1004 04:12:01.909447       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 04:12:01.921774       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 04:12:01.921831       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1004 04:12:07.138865       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="20.079359ms"
	I1004 04:12:07.138973       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="47.513µs"
	I1004 04:12:07.166867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="15.758775ms"
	I1004 04:12:07.167545       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="278.295µs"
	
	
	==> kube-controller-manager [9aff26b1ec7bb1ec30ebf45e6a1956367f821edea625ad7dcc9b2471118c1503] <==
	
	
	==> kube-proxy [0ca0d560be7008050b1fbb40a47846e30d12910c212d181e8a1107fe9038e8da] <==
	 >
	E1004 04:11:43.058722       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 04:11:53.850106       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-353264\": dial tcp 192.168.39.41:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.41:39838->192.168.39.41:8443: read: connection reset by peer"
	E1004 04:11:54.964970       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-353264\": dial tcp 192.168.39.41:8443: connect: connection refused"
	I1004 04:11:57.960393       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.41"]
	E1004 04:11:57.960672       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 04:11:58.006387       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 04:11:58.006518       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 04:11:58.006551       1 server_linux.go:169] "Using iptables Proxier"
	I1004 04:11:58.010828       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 04:11:58.011121       1 server.go:483] "Version info" version="v1.31.1"
	I1004 04:11:58.011150       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 04:11:58.013207       1 config.go:199] "Starting service config controller"
	I1004 04:11:58.013264       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 04:11:58.013293       1 config.go:105] "Starting endpoint slice config controller"
	I1004 04:11:58.013297       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 04:11:58.014419       1 config.go:328] "Starting node config controller"
	I1004 04:11:58.014509       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 04:11:58.114084       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 04:11:58.114202       1 shared_informer.go:320] Caches are synced for service config
	I1004 04:11:58.114844       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7934b01963c4be725cd65138a3481e7a73e7902fe8f0e73707284f44de4f47ef] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 04:11:05.670069       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 04:11:05.695878       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.41"]
	E1004 04:11:05.696135       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 04:11:05.750402       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 04:11:05.750651       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 04:11:05.750730       1 server_linux.go:169] "Using iptables Proxier"
	I1004 04:11:05.758677       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 04:11:05.759793       1 server.go:483] "Version info" version="v1.31.1"
	I1004 04:11:05.759827       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 04:11:05.763934       1 config.go:199] "Starting service config controller"
	I1004 04:11:05.764397       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 04:11:05.764681       1 config.go:105] "Starting endpoint slice config controller"
	I1004 04:11:05.764708       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 04:11:05.766768       1 config.go:328] "Starting node config controller"
	I1004 04:11:05.766794       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 04:11:05.865213       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 04:11:05.865320       1 shared_informer.go:320] Caches are synced for service config
	I1004 04:11:05.868420       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [942a33b53c14884a730c0d9372ec5780215af6816181a9f78c41f6daceb8793c] <==
	W1004 04:11:55.238598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.41:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E1004 04:11:55.238661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.41:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.41:8443: connect: connection refused" logger="UnhandledError"
	W1004 04:11:57.881949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 04:11:57.882016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.882109       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 04:11:57.882138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.882238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 04:11:57.882296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.882908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 04:11:57.882989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.883161       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 04:11:57.883821       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.884157       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 04:11:57.885086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.886698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 04:11:57.887574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.887850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 04:11:57.889551       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.888636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 04:11:57.889670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.888647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 04:11:57.889731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.888918       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 04:11:57.889783       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1004 04:12:00.754569       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [dca00a9c6d44e15b6fd112c388a715d1ba7ec29eb6d9bf7442109db7344a9a29] <==
	E1004 04:10:58.186347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.235774       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 04:10:58.235825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.252424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 04:10:58.252532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.282552       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 04:10:58.282613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.283837       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 04:10:58.283900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.284106       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 04:10:58.284152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.376058       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 04:10:58.376113       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.445078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 04:10:58.445151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.488750       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 04:10:58.488810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.568123       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 04:10:58.568182       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1004 04:10:58.609348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 04:10:58.609417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.683733       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 04:10:58.684258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1004 04:11:01.202541       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1004 04:11:27.571056       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 04 04:11:54 pause-353264 kubelet[2936]: E1004 04:11:54.656339    2936 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-353264?timeout=10s\": dial tcp 192.168.39.41:8443: connect: connection refused" interval="400ms"
	Oct 04 04:11:54 pause-353264 kubelet[2936]: I1004 04:11:54.757364    2936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ac1c284d50647f044dc4b553a259a6e-ca-certs\") pod \"kube-apiserver-pause-353264\" (UID: \"2ac1c284d50647f044dc4b553a259a6e\") " pod="kube-system/kube-apiserver-pause-353264"
	Oct 04 04:11:54 pause-353264 kubelet[2936]: I1004 04:11:54.757426    2936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ac1c284d50647f044dc4b553a259a6e-k8s-certs\") pod \"kube-apiserver-pause-353264\" (UID: \"2ac1c284d50647f044dc4b553a259a6e\") " pod="kube-system/kube-apiserver-pause-353264"
	Oct 04 04:11:54 pause-353264 kubelet[2936]: I1004 04:11:54.757551    2936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ac1c284d50647f044dc4b553a259a6e-usr-share-ca-certificates\") pod \"kube-apiserver-pause-353264\" (UID: \"2ac1c284d50647f044dc4b553a259a6e\") " pod="kube-system/kube-apiserver-pause-353264"
	Oct 04 04:11:54 pause-353264 kubelet[2936]: I1004 04:11:54.831683    2936 kubelet_node_status.go:72] "Attempting to register node" node="pause-353264"
	Oct 04 04:11:54 pause-353264 kubelet[2936]: E1004 04:11:54.832668    2936 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.41:8443: connect: connection refused" node="pause-353264"
	Oct 04 04:11:54 pause-353264 kubelet[2936]: I1004 04:11:54.952216    2936 scope.go:117] "RemoveContainer" containerID="7d25766843c897790bce723482daabdb70fdb342b72df7f64ee2248db1c95117"
	Oct 04 04:11:54 pause-353264 kubelet[2936]: I1004 04:11:54.952876    2936 scope.go:117] "RemoveContainer" containerID="9aff26b1ec7bb1ec30ebf45e6a1956367f821edea625ad7dcc9b2471118c1503"
	Oct 04 04:11:54 pause-353264 kubelet[2936]: I1004 04:11:54.952966    2936 scope.go:117] "RemoveContainer" containerID="880e75697720a668948a7e23ff0534f175e214479c40dc90fa96079410f83512"
	Oct 04 04:11:55 pause-353264 kubelet[2936]: E1004 04:11:55.058415    2936 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-353264?timeout=10s\": dial tcp 192.168.39.41:8443: connect: connection refused" interval="800ms"
	Oct 04 04:11:55 pause-353264 kubelet[2936]: I1004 04:11:55.234849    2936 kubelet_node_status.go:72] "Attempting to register node" node="pause-353264"
	Oct 04 04:11:55 pause-353264 kubelet[2936]: E1004 04:11:55.235813    2936 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.41:8443: connect: connection refused" node="pause-353264"
	Oct 04 04:11:56 pause-353264 kubelet[2936]: I1004 04:11:56.038109    2936 kubelet_node_status.go:72] "Attempting to register node" node="pause-353264"
	Oct 04 04:11:58 pause-353264 kubelet[2936]: I1004 04:11:58.046145    2936 kubelet_node_status.go:111] "Node was previously registered" node="pause-353264"
	Oct 04 04:11:58 pause-353264 kubelet[2936]: I1004 04:11:58.046261    2936 kubelet_node_status.go:75] "Successfully registered node" node="pause-353264"
	Oct 04 04:11:58 pause-353264 kubelet[2936]: I1004 04:11:58.046293    2936 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 04 04:11:58 pause-353264 kubelet[2936]: I1004 04:11:58.047146    2936 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 04 04:11:58 pause-353264 kubelet[2936]: I1004 04:11:58.428698    2936 apiserver.go:52] "Watching apiserver"
	Oct 04 04:11:58 pause-353264 kubelet[2936]: I1004 04:11:58.451951    2936 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 04 04:11:58 pause-353264 kubelet[2936]: I1004 04:11:58.499189    2936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b374015-3f42-42d6-8357-e27efe1a939a-xtables-lock\") pod \"kube-proxy-tthhg\" (UID: \"5b374015-3f42-42d6-8357-e27efe1a939a\") " pod="kube-system/kube-proxy-tthhg"
	Oct 04 04:11:58 pause-353264 kubelet[2936]: I1004 04:11:58.499347    2936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b374015-3f42-42d6-8357-e27efe1a939a-lib-modules\") pod \"kube-proxy-tthhg\" (UID: \"5b374015-3f42-42d6-8357-e27efe1a939a\") " pod="kube-system/kube-proxy-tthhg"
	Oct 04 04:12:04 pause-353264 kubelet[2936]: E1004 04:12:04.535605    2936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015124535088018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:12:04 pause-353264 kubelet[2936]: E1004 04:12:04.535657    2936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015124535088018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:12:14 pause-353264 kubelet[2936]: E1004 04:12:14.537264    2936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015134536879226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:12:14 pause-353264 kubelet[2936]: E1004 04:12:14.537289    2936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015134536879226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-353264 -n pause-353264
helpers_test.go:261: (dbg) Run:  kubectl --context pause-353264 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-353264 -n pause-353264
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-353264 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-353264 logs -n 25: (1.647339539s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-204413 sudo                  | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-204413 sudo                  | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-204413 sudo cat              | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-204413 sudo cat              | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-204413 sudo                  | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-204413 sudo                  | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-204413 sudo                  | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-204413 sudo find             | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-204413 sudo crio             | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-204413                       | cilium-204413             | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC | 04 Oct 24 04:07 UTC |
	| start   | -p force-systemd-flag-519066           | force-systemd-flag-519066 | jenkins | v1.34.0 | 04 Oct 24 04:07 UTC | 04 Oct 24 04:09 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p offline-crio-329336                 | offline-crio-329336       | jenkins | v1.34.0 | 04 Oct 24 04:08 UTC | 04 Oct 24 04:08 UTC |
	| start   | -p cert-expiration-363290              | cert-expiration-363290    | jenkins | v1.34.0 | 04 Oct 24 04:08 UTC | 04 Oct 24 04:10 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-391967            | force-systemd-env-391967  | jenkins | v1.34.0 | 04 Oct 24 04:09 UTC | 04 Oct 24 04:09 UTC |
	| start   | -p cert-options-756541                 | cert-options-756541       | jenkins | v1.34.0 | 04 Oct 24 04:09 UTC | 04 Oct 24 04:10 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-519066 ssh cat      | force-systemd-flag-519066 | jenkins | v1.34.0 | 04 Oct 24 04:09 UTC | 04 Oct 24 04:09 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-519066           | force-systemd-flag-519066 | jenkins | v1.34.0 | 04 Oct 24 04:09 UTC | 04 Oct 24 04:09 UTC |
	| start   | -p pause-353264 --memory=2048          | pause-353264              | jenkins | v1.34.0 | 04 Oct 24 04:09 UTC | 04 Oct 24 04:11 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-756541 ssh                | cert-options-756541       | jenkins | v1.34.0 | 04 Oct 24 04:10 UTC | 04 Oct 24 04:10 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-756541 -- sudo         | cert-options-756541       | jenkins | v1.34.0 | 04 Oct 24 04:10 UTC | 04 Oct 24 04:10 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-756541                 | cert-options-756541       | jenkins | v1.34.0 | 04 Oct 24 04:10 UTC | 04 Oct 24 04:10 UTC |
	| start   | -p stopped-upgrade-389737              | minikube                  | jenkins | v1.26.0 | 04 Oct 24 04:10 UTC | 04 Oct 24 04:11 UTC |
	|         | --memory=2200 --vm-driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-353264                        | pause-353264              | jenkins | v1.34.0 | 04 Oct 24 04:11 UTC | 04 Oct 24 04:12 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-389737 stop            | minikube                  | jenkins | v1.26.0 | 04 Oct 24 04:11 UTC | 04 Oct 24 04:11 UTC |
	| start   | -p stopped-upgrade-389737              | stopped-upgrade-389737    | jenkins | v1.34.0 | 04 Oct 24 04:11 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 04:11:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 04:11:54.923086   59686 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:11:54.923193   59686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:11:54.923202   59686 out.go:358] Setting ErrFile to fd 2...
	I1004 04:11:54.923206   59686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:11:54.923386   59686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:11:54.923916   59686 out.go:352] Setting JSON to false
	I1004 04:11:54.924792   59686 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6860,"bootTime":1728008255,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 04:11:54.924884   59686 start.go:139] virtualization: kvm guest
	I1004 04:11:54.926766   59686 out.go:177] * [stopped-upgrade-389737] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 04:11:54.927932   59686 notify.go:220] Checking for updates...
	I1004 04:11:54.927935   59686 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 04:11:54.929060   59686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 04:11:54.930375   59686 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:11:54.931581   59686 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:11:54.932704   59686 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 04:11:54.933823   59686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 04:11:54.935464   59686 config.go:182] Loaded profile config "stopped-upgrade-389737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1004 04:11:54.936049   59686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:11:54.936104   59686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:11:54.950715   59686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37825
	I1004 04:11:54.951241   59686 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:11:54.951753   59686 main.go:141] libmachine: Using API Version  1
	I1004 04:11:54.951773   59686 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:11:54.952098   59686 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:11:54.952267   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .DriverName
	I1004 04:11:54.954009   59686 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1004 04:11:54.955156   59686 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 04:11:54.955496   59686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:11:54.955532   59686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:11:54.970319   59686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36407
	I1004 04:11:54.970875   59686 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:11:54.971339   59686 main.go:141] libmachine: Using API Version  1
	I1004 04:11:54.971363   59686 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:11:54.971697   59686 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:11:54.971904   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .DriverName
	I1004 04:11:55.008627   59686 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 04:11:55.010513   59686 start.go:297] selected driver: kvm2
	I1004 04:11:55.010532   59686 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-389737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-389
737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.179 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1004 04:11:55.010665   59686 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 04:11:55.011534   59686 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:11:55.011624   59686 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 04:11:55.027285   59686 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 04:11:55.027843   59686 cni.go:84] Creating CNI manager for ""
	I1004 04:11:55.027917   59686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:11:55.028031   59686 start.go:340] cluster config:
	{Name:stopped-upgrade-389737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-389737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.179 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1004 04:11:55.028187   59686 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:11:55.030023   59686 out.go:177] * Starting "stopped-upgrade-389737" primary control-plane node in "stopped-upgrade-389737" cluster
	I1004 04:11:55.031443   59686 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I1004 04:11:55.031485   59686 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I1004 04:11:55.031496   59686 cache.go:56] Caching tarball of preloaded images
	I1004 04:11:55.031575   59686 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 04:11:55.031586   59686 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I1004 04:11:55.031674   59686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/stopped-upgrade-389737/config.json ...
	I1004 04:11:55.031892   59686 start.go:360] acquireMachinesLock for stopped-upgrade-389737: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:11:55.031951   59686 start.go:364] duration metric: took 32.807µs to acquireMachinesLock for "stopped-upgrade-389737"
	I1004 04:11:55.031965   59686 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:11:55.031972   59686 fix.go:54] fixHost starting: 
	I1004 04:11:55.032373   59686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:11:55.032417   59686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:11:55.047497   59686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I1004 04:11:55.048031   59686 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:11:55.048736   59686 main.go:141] libmachine: Using API Version  1
	I1004 04:11:55.048782   59686 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:11:55.049134   59686 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:11:55.049372   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .DriverName
	I1004 04:11:55.049557   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetState
	I1004 04:11:55.051769   59686 fix.go:112] recreateIfNeeded on stopped-upgrade-389737: state=Stopped err=<nil>
	I1004 04:11:55.051842   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .DriverName
	W1004 04:11:55.052020   59686 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:11:55.054019   59686 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-389737" ...
	I1004 04:11:54.077902   59356 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:11:54.311139   59356 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:11:54.381749   59356 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:11:54.452210   59356 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:11:54.452297   59356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:11:54.952940   59356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:11:55.453017   59356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:11:55.469251   59356 api_server.go:72] duration metric: took 1.017038288s to wait for apiserver process to appear ...
	I1004 04:11:55.469282   59356 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:11:55.469305   59356 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1004 04:11:57.854910   59356 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:11:57.854940   59356 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:11:57.854957   59356 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1004 04:11:57.965567   59356 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:11:57.965610   59356 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:11:57.969831   59356 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1004 04:11:57.977511   59356 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:11:57.977537   59356 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:11:58.470331   59356 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1004 04:11:58.474498   59356 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:11:58.474522   59356 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:11:58.970314   59356 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1004 04:11:58.978942   59356 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:11:58.978975   59356 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:11:59.469572   59356 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1004 04:11:59.474157   59356 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1004 04:11:59.481340   59356 api_server.go:141] control plane version: v1.31.1
	I1004 04:11:59.481369   59356 api_server.go:131] duration metric: took 4.012080581s to wait for apiserver health ...
	I1004 04:11:59.481377   59356 cni.go:84] Creating CNI manager for ""
	I1004 04:11:59.481383   59356 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:11:59.483711   59356 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:11:55.055399   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .Start
	I1004 04:11:55.055620   59686 main.go:141] libmachine: (stopped-upgrade-389737) Ensuring networks are active...
	I1004 04:11:55.056531   59686 main.go:141] libmachine: (stopped-upgrade-389737) Ensuring network default is active
	I1004 04:11:55.056978   59686 main.go:141] libmachine: (stopped-upgrade-389737) Ensuring network mk-stopped-upgrade-389737 is active
	I1004 04:11:55.057483   59686 main.go:141] libmachine: (stopped-upgrade-389737) Getting domain xml...
	I1004 04:11:55.058486   59686 main.go:141] libmachine: (stopped-upgrade-389737) Creating domain...
	I1004 04:11:56.303929   59686 main.go:141] libmachine: (stopped-upgrade-389737) Waiting to get IP...
	I1004 04:11:56.304976   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:11:56.305354   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:11:56.305426   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:11:56.305336   59722 retry.go:31] will retry after 238.462034ms: waiting for machine to come up
	I1004 04:11:56.545960   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:11:56.546468   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:11:56.546493   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:11:56.546419   59722 retry.go:31] will retry after 350.897629ms: waiting for machine to come up
	I1004 04:11:56.899102   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:11:56.899616   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:11:56.899645   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:11:56.899566   59722 retry.go:31] will retry after 447.479738ms: waiting for machine to come up
	I1004 04:11:57.348152   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:11:57.348661   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:11:57.348688   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:11:57.348615   59722 retry.go:31] will retry after 369.223931ms: waiting for machine to come up
	I1004 04:11:57.719177   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:11:57.719737   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:11:57.719769   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:11:57.719678   59722 retry.go:31] will retry after 602.656032ms: waiting for machine to come up
	I1004 04:11:58.323435   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:11:58.323898   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:11:58.323954   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:11:58.323881   59722 retry.go:31] will retry after 815.829727ms: waiting for machine to come up
	I1004 04:11:59.140901   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:11:59.141401   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:11:59.141427   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:11:59.141367   59722 retry.go:31] will retry after 796.999391ms: waiting for machine to come up
	I1004 04:11:59.485392   59356 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:11:59.496838   59356 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:11:59.517923   59356 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:11:59.518024   59356 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1004 04:11:59.518047   59356 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1004 04:11:59.526831   59356 system_pods.go:59] 6 kube-system pods found
	I1004 04:11:59.526861   59356 system_pods.go:61] "coredns-7c65d6cfc9-gttvn" [4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c] Running
	I1004 04:11:59.526870   59356 system_pods.go:61] "etcd-pause-353264" [834bfa30-5dd7-4d25-8331-2b9418027c01] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:11:59.526876   59356 system_pods.go:61] "kube-apiserver-pause-353264" [b0180bdc-2f96-4ea3-ada2-9e0f3251fbe5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:11:59.526884   59356 system_pods.go:61] "kube-controller-manager-pause-353264" [da561ad5-cbdc-4e0a-b1cf-a99ed2605d14] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:11:59.526889   59356 system_pods.go:61] "kube-proxy-tthhg" [5b374015-3f42-42d6-8357-e27efe1a939a] Running
	I1004 04:11:59.526894   59356 system_pods.go:61] "kube-scheduler-pause-353264" [9b44baad-ee16-42bf-aa51-0810167533ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:11:59.526909   59356 system_pods.go:74] duration metric: took 8.951419ms to wait for pod list to return data ...
	I1004 04:11:59.526917   59356 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:11:59.531204   59356 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:11:59.531236   59356 node_conditions.go:123] node cpu capacity is 2
	I1004 04:11:59.531253   59356 node_conditions.go:105] duration metric: took 4.331354ms to run NodePressure ...
	I1004 04:11:59.531280   59356 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:11:59.802165   59356 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:11:59.808719   59356 kubeadm.go:739] kubelet initialised
	I1004 04:11:59.808754   59356 kubeadm.go:740] duration metric: took 6.543454ms waiting for restarted kubelet to initialise ...
	I1004 04:11:59.808766   59356 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:11:59.815284   59356 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-gttvn" in "kube-system" namespace to be "Ready" ...
	I1004 04:11:59.823313   59356 pod_ready.go:93] pod "coredns-7c65d6cfc9-gttvn" in "kube-system" namespace has status "Ready":"True"
	I1004 04:11:59.823337   59356 pod_ready.go:82] duration metric: took 8.022694ms for pod "coredns-7c65d6cfc9-gttvn" in "kube-system" namespace to be "Ready" ...
	I1004 04:11:59.823347   59356 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:01.834122   59356 pod_ready.go:103] pod "etcd-pause-353264" in "kube-system" namespace has status "Ready":"False"
	I1004 04:12:01.181603   54385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:12:01.181837   54385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:11:59.940831   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:11:59.941467   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:11:59.941543   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:11:59.941477   59722 retry.go:31] will retry after 1.067074037s: waiting for machine to come up
	I1004 04:12:01.010980   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:01.011558   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:12:01.011578   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:12:01.011487   59722 retry.go:31] will retry after 1.371155898s: waiting for machine to come up
	I1004 04:12:02.385128   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:02.385719   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:12:02.385745   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:12:02.385667   59722 retry.go:31] will retry after 2.308141043s: waiting for machine to come up
	I1004 04:12:04.697015   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:04.697476   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:12:04.697504   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:12:04.697414   59722 retry.go:31] will retry after 2.468063752s: waiting for machine to come up
	I1004 04:12:04.330017   59356 pod_ready.go:103] pod "etcd-pause-353264" in "kube-system" namespace has status "Ready":"False"
	I1004 04:12:06.331992   59356 pod_ready.go:103] pod "etcd-pause-353264" in "kube-system" namespace has status "Ready":"False"
	I1004 04:12:07.168887   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:07.169442   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:12:07.169490   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:12:07.169408   59722 retry.go:31] will retry after 2.477656955s: waiting for machine to come up
	I1004 04:12:09.649007   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:09.649553   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | unable to find current IP address of domain stopped-upgrade-389737 in network mk-stopped-upgrade-389737
	I1004 04:12:09.649579   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | I1004 04:12:09.649500   59722 retry.go:31] will retry after 3.983855136s: waiting for machine to come up
	I1004 04:12:08.831026   59356 pod_ready.go:103] pod "etcd-pause-353264" in "kube-system" namespace has status "Ready":"False"
	I1004 04:12:10.330577   59356 pod_ready.go:93] pod "etcd-pause-353264" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:10.330602   59356 pod_ready.go:82] duration metric: took 10.50724891s for pod "etcd-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:10.330613   59356 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.336975   59356 pod_ready.go:93] pod "kube-apiserver-pause-353264" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:11.337003   59356 pod_ready.go:82] duration metric: took 1.006383121s for pod "kube-apiserver-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.337012   59356 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.342120   59356 pod_ready.go:93] pod "kube-controller-manager-pause-353264" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:11.342140   59356 pod_ready.go:82] duration metric: took 5.121943ms for pod "kube-controller-manager-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.342148   59356 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tthhg" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.348177   59356 pod_ready.go:93] pod "kube-proxy-tthhg" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:11.348203   59356 pod_ready.go:82] duration metric: took 6.04771ms for pod "kube-proxy-tthhg" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.348216   59356 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.353137   59356 pod_ready.go:93] pod "kube-scheduler-pause-353264" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:11.353156   59356 pod_ready.go:82] duration metric: took 4.933845ms for pod "kube-scheduler-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.353164   59356 pod_ready.go:39] duration metric: took 11.544386687s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:12:11.353180   59356 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:12:11.367452   59356 ops.go:34] apiserver oom_adj: -16
	I1004 04:12:11.367471   59356 kubeadm.go:597] duration metric: took 29.267075053s to restartPrimaryControlPlane
	I1004 04:12:11.367480   59356 kubeadm.go:394] duration metric: took 29.802462907s to StartCluster
	I1004 04:12:11.367495   59356 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:12:11.367564   59356 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:12:11.368491   59356 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:12:11.368712   59356 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:12:11.368794   59356 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:12:11.368936   59356 config.go:182] Loaded profile config "pause-353264": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:12:11.370428   59356 out.go:177] * Enabled addons: 
	I1004 04:12:11.370444   59356 out.go:177] * Verifying Kubernetes components...
	I1004 04:12:11.371969   59356 addons.go:510] duration metric: took 3.183426ms for enable addons: enabled=[]
	I1004 04:12:11.371983   59356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:12:11.519302   59356 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:12:11.537493   59356 node_ready.go:35] waiting up to 6m0s for node "pause-353264" to be "Ready" ...
	I1004 04:12:11.540518   59356 node_ready.go:49] node "pause-353264" has status "Ready":"True"
	I1004 04:12:11.540558   59356 node_ready.go:38] duration metric: took 3.006339ms for node "pause-353264" to be "Ready" ...
	I1004 04:12:11.540569   59356 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:12:11.545071   59356 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gttvn" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.928381   59356 pod_ready.go:93] pod "coredns-7c65d6cfc9-gttvn" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:11.928412   59356 pod_ready.go:82] duration metric: took 383.31432ms for pod "coredns-7c65d6cfc9-gttvn" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:11.928426   59356 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:12.328082   59356 pod_ready.go:93] pod "etcd-pause-353264" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:12.328110   59356 pod_ready.go:82] duration metric: took 399.674453ms for pod "etcd-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:12.328123   59356 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:12.733710   59356 pod_ready.go:93] pod "kube-apiserver-pause-353264" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:12.733742   59356 pod_ready.go:82] duration metric: took 405.61084ms for pod "kube-apiserver-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:12.733756   59356 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:13.128025   59356 pod_ready.go:93] pod "kube-controller-manager-pause-353264" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:13.128054   59356 pod_ready.go:82] duration metric: took 394.288848ms for pod "kube-controller-manager-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:13.128068   59356 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tthhg" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:13.528315   59356 pod_ready.go:93] pod "kube-proxy-tthhg" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:13.528339   59356 pod_ready.go:82] duration metric: took 400.264199ms for pod "kube-proxy-tthhg" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:13.528349   59356 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:13.930187   59356 pod_ready.go:93] pod "kube-scheduler-pause-353264" in "kube-system" namespace has status "Ready":"True"
	I1004 04:12:13.930215   59356 pod_ready.go:82] duration metric: took 401.85941ms for pod "kube-scheduler-pause-353264" in "kube-system" namespace to be "Ready" ...
	I1004 04:12:13.930224   59356 pod_ready.go:39] duration metric: took 2.389643751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:12:13.930238   59356 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:12:13.930288   59356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:12:13.946524   59356 api_server.go:72] duration metric: took 2.577784416s to wait for apiserver process to appear ...
	I1004 04:12:13.946559   59356 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:12:13.946584   59356 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1004 04:12:13.951589   59356 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1004 04:12:13.952990   59356 api_server.go:141] control plane version: v1.31.1
	I1004 04:12:13.953012   59356 api_server.go:131] duration metric: took 6.445157ms to wait for apiserver health ...
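	The api_server.go lines above record minikube probing the control plane's /healthz endpoint and accepting an HTTP 200 "ok" reply as healthy. Below is a minimal standalone sketch of that kind of probe, not minikube's own code; the node IP and port are copied from the log as an example, and InsecureSkipVerify is an assumption made for brevity (minikube verifies against the cluster CA instead).

```go
// Hedged sketch: probe the apiserver healthz endpoint seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for brevity: skip TLS verification; a real client would
		// trust the cluster CA certificate instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.41:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", matching the log output.
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}
```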
	I1004 04:12:13.953019   59356 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:12:14.130044   59356 system_pods.go:59] 6 kube-system pods found
	I1004 04:12:14.130074   59356 system_pods.go:61] "coredns-7c65d6cfc9-gttvn" [4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c] Running
	I1004 04:12:14.130079   59356 system_pods.go:61] "etcd-pause-353264" [834bfa30-5dd7-4d25-8331-2b9418027c01] Running
	I1004 04:12:14.130083   59356 system_pods.go:61] "kube-apiserver-pause-353264" [b0180bdc-2f96-4ea3-ada2-9e0f3251fbe5] Running
	I1004 04:12:14.130086   59356 system_pods.go:61] "kube-controller-manager-pause-353264" [da561ad5-cbdc-4e0a-b1cf-a99ed2605d14] Running
	I1004 04:12:14.130090   59356 system_pods.go:61] "kube-proxy-tthhg" [5b374015-3f42-42d6-8357-e27efe1a939a] Running
	I1004 04:12:14.130093   59356 system_pods.go:61] "kube-scheduler-pause-353264" [9b44baad-ee16-42bf-aa51-0810167533ff] Running
	I1004 04:12:14.130100   59356 system_pods.go:74] duration metric: took 177.074869ms to wait for pod list to return data ...
	I1004 04:12:14.130106   59356 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:12:14.327469   59356 default_sa.go:45] found service account: "default"
	I1004 04:12:14.327491   59356 default_sa.go:55] duration metric: took 197.380356ms for default service account to be created ...
	I1004 04:12:14.327508   59356 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:12:14.530370   59356 system_pods.go:86] 6 kube-system pods found
	I1004 04:12:14.530398   59356 system_pods.go:89] "coredns-7c65d6cfc9-gttvn" [4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c] Running
	I1004 04:12:14.530404   59356 system_pods.go:89] "etcd-pause-353264" [834bfa30-5dd7-4d25-8331-2b9418027c01] Running
	I1004 04:12:14.530408   59356 system_pods.go:89] "kube-apiserver-pause-353264" [b0180bdc-2f96-4ea3-ada2-9e0f3251fbe5] Running
	I1004 04:12:14.530413   59356 system_pods.go:89] "kube-controller-manager-pause-353264" [da561ad5-cbdc-4e0a-b1cf-a99ed2605d14] Running
	I1004 04:12:14.530417   59356 system_pods.go:89] "kube-proxy-tthhg" [5b374015-3f42-42d6-8357-e27efe1a939a] Running
	I1004 04:12:14.530421   59356 system_pods.go:89] "kube-scheduler-pause-353264" [9b44baad-ee16-42bf-aa51-0810167533ff] Running
	I1004 04:12:14.530427   59356 system_pods.go:126] duration metric: took 202.913767ms to wait for k8s-apps to be running ...
	I1004 04:12:14.530435   59356 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:12:14.530484   59356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:12:14.552933   59356 system_svc.go:56] duration metric: took 22.489797ms WaitForService to wait for kubelet
	I1004 04:12:14.552965   59356 kubeadm.go:582] duration metric: took 3.184228997s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:12:14.552987   59356 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:12:14.727510   59356 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:12:14.727541   59356 node_conditions.go:123] node cpu capacity is 2
	I1004 04:12:14.727552   59356 node_conditions.go:105] duration metric: took 174.56045ms to run NodePressure ...
	I1004 04:12:14.727563   59356 start.go:241] waiting for startup goroutines ...
	I1004 04:12:14.727569   59356 start.go:246] waiting for cluster config update ...
	I1004 04:12:14.727577   59356 start.go:255] writing updated cluster config ...
	I1004 04:12:14.727927   59356 ssh_runner.go:195] Run: rm -f paused
	I1004 04:12:14.785686   59356 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:12:14.787972   59356 out.go:177] * Done! kubectl is now configured to use "pause-353264" cluster and "default" namespace by default
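	The pod_ready.go entries in the block above are repeated polls of each control-plane pod's Ready condition with a per-pod timeout ("waiting up to 4m0s/6m0s ... to be Ready"). The following client-go sketch shows that polling pattern in isolation; it is not minikube's implementation, and the pod name, namespace, timeout, and kubeconfig location are taken from the log purely as example assumptions.

```go
// Hedged sketch: poll a pod's Ready condition until it is True or a timeout
// expires, mirroring the pod_ready.go wait loop recorded above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx := context.Background()
	deadline := time.Now().Add(4 * time.Minute) // "waiting up to 4m0s"
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-353264", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("timed out waiting for pod to be Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // poll interval; transient errors are retried
	}
}
```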
	I1004 04:12:13.636943   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.637408   59686 main.go:141] libmachine: (stopped-upgrade-389737) Found IP for machine: 192.168.61.179
	I1004 04:12:13.637429   59686 main.go:141] libmachine: (stopped-upgrade-389737) Reserving static IP address...
	I1004 04:12:13.637456   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has current primary IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.637969   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "stopped-upgrade-389737", mac: "52:54:00:01:43:69", ip: "192.168.61.179"} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:13.637986   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | skip adding static IP to network mk-stopped-upgrade-389737 - found existing host DHCP lease matching {name: "stopped-upgrade-389737", mac: "52:54:00:01:43:69", ip: "192.168.61.179"}
	I1004 04:12:13.638023   59686 main.go:141] libmachine: (stopped-upgrade-389737) Reserved static IP address: 192.168.61.179
	I1004 04:12:13.638054   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | Getting to WaitForSSH function...
	I1004 04:12:13.638068   59686 main.go:141] libmachine: (stopped-upgrade-389737) Waiting for SSH to be available...
	I1004 04:12:13.640241   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.640628   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:13.640650   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.640791   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | Using SSH client type: external
	I1004 04:12:13.640832   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/stopped-upgrade-389737/id_rsa (-rw-------)
	I1004 04:12:13.640872   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/stopped-upgrade-389737/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:12:13.640905   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | About to run SSH command:
	I1004 04:12:13.640915   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | exit 0
	I1004 04:12:13.728202   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | SSH cmd err, output: <nil>: 
	I1004 04:12:13.728616   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetConfigRaw
	I1004 04:12:13.729271   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetIP
	I1004 04:12:13.732411   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.732917   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:13.732941   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.733247   59686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/stopped-upgrade-389737/config.json ...
	I1004 04:12:13.733533   59686 machine.go:93] provisionDockerMachine start ...
	I1004 04:12:13.733559   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .DriverName
	I1004 04:12:13.733753   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:13.736689   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.737070   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:13.737114   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.737225   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHPort
	I1004 04:12:13.737550   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:13.737739   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:13.737897   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHUsername
	I1004 04:12:13.738087   59686 main.go:141] libmachine: Using SSH client type: native
	I1004 04:12:13.738293   59686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I1004 04:12:13.738312   59686 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:12:13.847940   59686 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:12:13.847968   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetMachineName
	I1004 04:12:13.848229   59686 buildroot.go:166] provisioning hostname "stopped-upgrade-389737"
	I1004 04:12:13.848268   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetMachineName
	I1004 04:12:13.848431   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:13.851138   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.851691   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:13.851733   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.851855   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHPort
	I1004 04:12:13.852036   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:13.852398   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:13.852573   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHUsername
	I1004 04:12:13.852751   59686 main.go:141] libmachine: Using SSH client type: native
	I1004 04:12:13.852919   59686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I1004 04:12:13.852931   59686 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-389737 && echo "stopped-upgrade-389737" | sudo tee /etc/hostname
	I1004 04:12:13.972119   59686 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-389737
	
	I1004 04:12:13.972153   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:13.974971   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.975454   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:13.975479   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:13.975686   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHPort
	I1004 04:12:13.975895   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:13.976052   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:13.976197   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHUsername
	I1004 04:12:13.976431   59686 main.go:141] libmachine: Using SSH client type: native
	I1004 04:12:13.976599   59686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I1004 04:12:13.976615   59686 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-389737' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-389737/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-389737' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:12:14.094892   59686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:12:14.094926   59686 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:12:14.094951   59686 buildroot.go:174] setting up certificates
	I1004 04:12:14.094962   59686 provision.go:84] configureAuth start
	I1004 04:12:14.094994   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetMachineName
	I1004 04:12:14.095288   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetIP
	I1004 04:12:14.098085   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.098480   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:14.098519   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.098696   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:14.100959   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.101251   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:14.101273   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.101398   59686 provision.go:143] copyHostCerts
	I1004 04:12:14.101464   59686 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:12:14.101476   59686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:12:14.101552   59686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:12:14.101668   59686 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:12:14.101679   59686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:12:14.101719   59686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:12:14.101803   59686 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:12:14.101812   59686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:12:14.101850   59686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:12:14.101937   59686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-389737 san=[127.0.0.1 192.168.61.179 localhost minikube stopped-upgrade-389737]
	I1004 04:12:14.459060   59686 provision.go:177] copyRemoteCerts
	I1004 04:12:14.459130   59686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:12:14.459154   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:14.462494   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.462899   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:14.462937   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.463105   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHPort
	I1004 04:12:14.463372   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:14.463530   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHUsername
	I1004 04:12:14.463745   59686 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/stopped-upgrade-389737/id_rsa Username:docker}
	I1004 04:12:14.547345   59686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:12:14.569657   59686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1004 04:12:14.593247   59686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 04:12:14.614424   59686 provision.go:87] duration metric: took 519.451326ms to configureAuth
	I1004 04:12:14.614449   59686 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:12:14.614612   59686 config.go:182] Loaded profile config "stopped-upgrade-389737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1004 04:12:14.614680   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:14.617416   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.617925   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:14.617961   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.618196   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHPort
	I1004 04:12:14.618454   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:14.618711   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:14.618950   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHUsername
	I1004 04:12:14.619179   59686 main.go:141] libmachine: Using SSH client type: native
	I1004 04:12:14.619366   59686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I1004 04:12:14.619382   59686 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:12:14.908775   59686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:12:14.908799   59686 machine.go:96] duration metric: took 1.175248738s to provisionDockerMachine
	I1004 04:12:14.908812   59686 start.go:293] postStartSetup for "stopped-upgrade-389737" (driver="kvm2")
	I1004 04:12:14.908826   59686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:12:14.908851   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .DriverName
	I1004 04:12:14.909196   59686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:12:14.909223   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:14.911579   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.912014   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:14.912042   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:14.912141   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHPort
	I1004 04:12:14.912313   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:14.912474   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHUsername
	I1004 04:12:14.912613   59686 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/stopped-upgrade-389737/id_rsa Username:docker}
	I1004 04:12:14.998767   59686 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:12:15.002883   59686 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 04:12:15.002910   59686 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:12:15.002984   59686 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:12:15.003071   59686 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:12:15.003161   59686 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:12:15.012110   59686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:12:15.036443   59686 start.go:296] duration metric: took 127.617045ms for postStartSetup
	I1004 04:12:15.036515   59686 fix.go:56] duration metric: took 20.004516975s for fixHost
	I1004 04:12:15.036553   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:15.039438   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:15.039915   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:15.039942   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:15.040127   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHPort
	I1004 04:12:15.040334   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:15.040606   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:15.040787   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHUsername
	I1004 04:12:15.040952   59686 main.go:141] libmachine: Using SSH client type: native
	I1004 04:12:15.041109   59686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I1004 04:12:15.041119   59686 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:12:15.152747   59686 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015135.109282451
	
	I1004 04:12:15.152767   59686 fix.go:216] guest clock: 1728015135.109282451
	I1004 04:12:15.152775   59686 fix.go:229] Guest: 2024-10-04 04:12:15.109282451 +0000 UTC Remote: 2024-10-04 04:12:15.036532463 +0000 UTC m=+20.148603235 (delta=72.749988ms)
	I1004 04:12:15.152825   59686 fix.go:200] guest clock delta is within tolerance: 72.749988ms
	I1004 04:12:15.152835   59686 start.go:83] releasing machines lock for "stopped-upgrade-389737", held for 20.120874134s
	I1004 04:12:15.152861   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .DriverName
	I1004 04:12:15.153157   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetIP
	I1004 04:12:15.156297   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:15.156713   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:15.156743   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:15.156941   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .DriverName
	I1004 04:12:15.157770   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .DriverName
	I1004 04:12:15.157986   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .DriverName
	I1004 04:12:15.158074   59686 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:12:15.158122   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:15.158207   59686 ssh_runner.go:195] Run: cat /version.json
	I1004 04:12:15.158233   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHHostname
	I1004 04:12:15.161241   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:15.161846   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:15.161897   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:15.161921   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:15.162040   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHPort
	I1004 04:12:15.162461   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:15.162495   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:43:69", ip: ""} in network mk-stopped-upgrade-389737: {Iface:virbr1 ExpiryTime:2024-10-04 05:12:05 +0000 UTC Type:0 Mac:52:54:00:01:43:69 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:stopped-upgrade-389737 Clientid:01:52:54:00:01:43:69}
	I1004 04:12:15.162531   59686 main.go:141] libmachine: (stopped-upgrade-389737) DBG | domain stopped-upgrade-389737 has defined IP address 192.168.61.179 and MAC address 52:54:00:01:43:69 in network mk-stopped-upgrade-389737
	I1004 04:12:15.162739   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHUsername
	I1004 04:12:15.162743   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHPort
	I1004 04:12:15.162950   59686 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/stopped-upgrade-389737/id_rsa Username:docker}
	I1004 04:12:15.162964   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHKeyPath
	I1004 04:12:15.163200   59686 main.go:141] libmachine: (stopped-upgrade-389737) Calling .GetSSHUsername
	I1004 04:12:15.163366   59686 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/stopped-upgrade-389737/id_rsa Username:docker}
	W1004 04:12:15.265788   59686 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1004 04:12:15.265855   59686 ssh_runner.go:195] Run: systemctl --version
	I1004 04:12:15.271224   59686 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:12:15.411851   59686 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:12:15.418755   59686 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:12:15.418825   59686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:12:15.434642   59686 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:12:15.434668   59686 start.go:495] detecting cgroup driver to use...
	I1004 04:12:15.434769   59686 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:12:15.450883   59686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:12:15.464859   59686 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:12:15.464926   59686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:12:15.480491   59686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:12:15.493688   59686 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:12:15.615425   59686 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:12:15.740068   59686 docker.go:233] disabling docker service ...
	I1004 04:12:15.740147   59686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:12:15.756374   59686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:12:15.769965   59686 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:12:15.884036   59686 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:12:15.996447   59686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:12:16.010552   59686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:12:16.029442   59686 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1004 04:12:16.029506   59686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:12:16.041761   59686 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:12:16.041829   59686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:12:16.053386   59686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:12:16.063956   59686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:12:16.074612   59686 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:12:16.086101   59686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:12:16.097373   59686 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:12:16.112969   59686 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:12:16.122099   59686 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:12:16.130600   59686 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:12:16.130670   59686 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:12:16.145037   59686 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:12:16.154752   59686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:12:16.260787   59686 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:12:16.417672   59686 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:12:16.417736   59686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:12:16.422500   59686 start.go:563] Will wait 60s for crictl version
	I1004 04:12:16.422585   59686 ssh_runner.go:195] Run: which crictl
	I1004 04:12:16.426766   59686 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:12:16.463250   59686 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.22.3
	RuntimeApiVersion:  v1alpha2
	I1004 04:12:16.463360   59686 ssh_runner.go:195] Run: crio --version
	I1004 04:12:16.508114   59686 ssh_runner.go:195] Run: crio --version
	I1004 04:12:16.554442   59686 out.go:177] * Preparing Kubernetes v1.24.1 on CRI-O 1.22.3 ...
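	The closing steps of this run reconfigure CRI-O over SSH, restart it, wait up to 60s for /var/run/crio/crio.sock to appear, and then read the runtime version via crictl. The short sketch below reproduces only those last two steps, run locally rather than through ssh_runner; it assumes crictl is installed and that the process has permission to reach the socket.

```go
// Hedged sketch: wait for the CRI-O socket to appear after a restart, then
// query the runtime version with crictl (run locally, not over SSH).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second) // "Will wait 60s for socket path"
	for {
		if _, err := os.Stat(sock); err == nil {
			break // socket exists, the runtime should be accepting connections
		}
		if time.Now().After(deadline) {
			fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
			os.Exit(1)
		}
		time.Sleep(time.Second)
	}
	out, err := exec.Command("crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, "crictl version failed:", err, string(out))
		os.Exit(1)
	}
	fmt.Print(string(out)) // e.g. RuntimeName: cri-o, RuntimeVersion: 1.22.3
}
```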
	
	
	==> CRI-O <==
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.692166332Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7ba8d99af7074e6ee7a539c38638d2b878cfc1e624e622e71a2cb26222aa91d3,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-gttvn,Uid:4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728015101563446412,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T04:11:05.057989943Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d56131f182017e3dab1fe7c6e6b952904badf8b486d3b2374fa2641261d8217a,Metadata:&PodSandboxMetadata{Name:kube-proxy-tthhg,Uid:5b374015-3f42-42d6-8357-e27efe1a939a,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1728015101313206315,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T04:11:04.886344298Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&PodSandboxMetadata{Name:etcd-pause-353264,Uid:82f143f5852094f019e251a252b8a0a5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728015101290789836,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/
etcd.advertise-client-urls: https://192.168.39.41:2379,kubernetes.io/config.hash: 82f143f5852094f019e251a252b8a0a5,kubernetes.io/config.seen: 2024-10-04T04:10:59.993159755Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-353264,Uid:2ac1c284d50647f044dc4b553a259a6e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728015101275428547,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.41:8443,kubernetes.io/config.hash: 2ac1c284d50647f044dc4b553a259a6e,kubernetes.io/config.seen: 2024-10-04T04:10:59.993161203Z,kubernetes.io/config.source: file,},RuntimeHan
dler:,},&PodSandbox{Id:5df0266451547af38f201fe742368c81b995786205cfac73d19907087b499931,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-353264,Uid:ed681309c428fa41f72fe46aa98190a1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728015101269967315,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed681309c428fa41f72fe46aa98190a1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ed681309c428fa41f72fe46aa98190a1,kubernetes.io/config.seen: 2024-10-04T04:10:59.993158465Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-353264,Uid:49125844997c970c99bbbb0a5faf8cdd,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728015101264912516,Labels:map[string]strin
g{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 49125844997c970c99bbbb0a5faf8cdd,kubernetes.io/config.seen: 2024-10-04T04:10:59.993154291Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f76c8e108a6af8e4d582adf7aabe6b07bc2c05ac2af846f7efa2b9f0b302a090,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-gttvn,Uid:4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1728015065369191979,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/c
onfig.seen: 2024-10-04T04:11:05.057989943Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7440bcfcd066e374a6b056922273048c27bc180c0671eebde598a674df63252b,Metadata:&PodSandboxMetadata{Name:kube-proxy-tthhg,Uid:5b374015-3f42-42d6-8357-e27efe1a939a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1728015065194654562,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T04:11:04.886344298Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:857ce4225431f5a464d525007e5b848455dc4abd9def63613b93a869e212ca91,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-353264,Uid:ed681309c428fa41f72fe46aa98190a1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,Creat
edAt:1728015054170396170,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed681309c428fa41f72fe46aa98190a1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ed681309c428fa41f72fe46aa98190a1,kubernetes.io/config.seen: 2024-10-04T04:10:53.680095126Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=90febb48-7b40-4863-afd3-89b22c60fc31 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.693185117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb509056-aeda-4cea-98b4-b005666bb7ba name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.693261410Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb509056-aeda-4cea-98b4-b005666bb7ba name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.694110703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:740199b9cf8f7d44945f1595c6bc95b9d4a5f506de620b60b669d7350ee8c288,PodSandboxId:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015114992781418,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fcbb4a0c9f819570f35be774064d1789878d712f59584bf47095c8a2b8b9c2,PodSandboxId:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015114987863816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c9e66da51164aa6a44f8e0dffdbb1958d7eeecbe69c9855b5c62e85b5ca7731,PodSandboxId:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015114976853648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ca0d560be7008050b1fbb40a47846e30d12910c212d181e8a1107fe9038e8da,PodSandboxId:d56131f182017e3dab1fe7c6e6b952904badf8b486d3b2374fa2641261d8217a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015101873557801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c7e7e82971fb76354fb48da635c71754db693ee4946a616c23668e9da54086,PodSandboxId:7ba8d99af7074e6ee7a539c38638d2b878cfc1e624e622e71a2cb26222aa91d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015102500344006,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942a33b53c14884a730c0d9372ec5780215af6816181a9f78c41f6daceb8793c,PodSandboxId:5df0266451547af38f201fe742368c81b995786205cfac73d19907087b499931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015101752070190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed681309c428fa41f72fe46aa98190a1,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d25766843c897790bce723482daabdb70fdb342b72df7f64ee2248db1c95117,PodSandboxId:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728015101643719572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880e75697720a668948a7e23ff0534f175e214479c40dc90fa96079410f83512,PodSandboxId:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728015101521780303,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aff26b1ec7bb1ec30ebf45e6a1956367f821edea625ad7dcc9b2471118c1503,PodSandboxId:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728015101595742138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36279bf5603bc0e718742b055d454240e5aac0045b783d2c0bda216288633d35,PodSandboxId:f76c8e108a6af8e4d582adf7aabe6b07bc2c05ac2af846f7efa2b9f0b302a090,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728015065696616512,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7934b01963c4be725cd65138a3481e7a73e7902fe8f0e73707284f44de4f47ef,PodSandboxId:7440bcfcd066e374a6b056922273048c27bc180c0671eebde598a674df63252b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728015065304955485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca00a9c6d44e15b6fd112c388a715d1ba7ec29eb6d9bf7442109db7344a9a29,PodSandboxId:857ce4225431f5a464d525007e5b848455dc4abd9def63613b93a869e212ca91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728015054457422007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ed681309c428fa41f72fe46aa98190a1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb509056-aeda-4cea-98b4-b005666bb7ba name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.736813730Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a94c8301-3a14-4331-9b12-521785409646 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.736925676Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a94c8301-3a14-4331-9b12-521785409646 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.738939461Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3080797-c9d7-424c-816c-2719ce4c9dd9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.739634950Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015137739590814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3080797-c9d7-424c-816c-2719ce4c9dd9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.741859878Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a10e9f6-ca94-4555-8dc8-5244bade0b22 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.741950387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a10e9f6-ca94-4555-8dc8-5244bade0b22 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.742324142Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:740199b9cf8f7d44945f1595c6bc95b9d4a5f506de620b60b669d7350ee8c288,PodSandboxId:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015114992781418,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fcbb4a0c9f819570f35be774064d1789878d712f59584bf47095c8a2b8b9c2,PodSandboxId:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015114987863816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c9e66da51164aa6a44f8e0dffdbb1958d7eeecbe69c9855b5c62e85b5ca7731,PodSandboxId:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015114976853648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ca0d560be7008050b1fbb40a47846e30d12910c212d181e8a1107fe9038e8da,PodSandboxId:d56131f182017e3dab1fe7c6e6b952904badf8b486d3b2374fa2641261d8217a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015101873557801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c7e7e82971fb76354fb48da635c71754db693ee4946a616c23668e9da54086,PodSandboxId:7ba8d99af7074e6ee7a539c38638d2b878cfc1e624e622e71a2cb26222aa91d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015102500344006,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942a33b53c14884a730c0d9372ec5780215af6816181a9f78c41f6daceb8793c,PodSandboxId:5df0266451547af38f201fe742368c81b995786205cfac73d19907087b499931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015101752070190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed681309c428fa41f72fe46aa98190a1,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d25766843c897790bce723482daabdb70fdb342b72df7f64ee2248db1c95117,PodSandboxId:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728015101643719572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880e75697720a668948a7e23ff0534f175e214479c40dc90fa96079410f83512,PodSandboxId:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728015101521780303,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aff26b1ec7bb1ec30ebf45e6a1956367f821edea625ad7dcc9b2471118c1503,PodSandboxId:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728015101595742138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36279bf5603bc0e718742b055d454240e5aac0045b783d2c0bda216288633d35,PodSandboxId:f76c8e108a6af8e4d582adf7aabe6b07bc2c05ac2af846f7efa2b9f0b302a090,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728015065696616512,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7934b01963c4be725cd65138a3481e7a73e7902fe8f0e73707284f44de4f47ef,PodSandboxId:7440bcfcd066e374a6b056922273048c27bc180c0671eebde598a674df63252b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728015065304955485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca00a9c6d44e15b6fd112c388a715d1ba7ec29eb6d9bf7442109db7344a9a29,PodSandboxId:857ce4225431f5a464d525007e5b848455dc4abd9def63613b93a869e212ca91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728015054457422007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ed681309c428fa41f72fe46aa98190a1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a10e9f6-ca94-4555-8dc8-5244bade0b22 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.804037199Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5be93d8b-0c61-4219-b03f-a5c8f5fa2aeb name=/runtime.v1.RuntimeService/Version
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.804146709Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5be93d8b-0c61-4219-b03f-a5c8f5fa2aeb name=/runtime.v1.RuntimeService/Version
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.805969957Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d15b7937-4cd1-46c6-8b18-5a86dd5ed078 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.806647431Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015137806607826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d15b7937-4cd1-46c6-8b18-5a86dd5ed078 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.807777054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a19cbf6-cdc1-45f7-b17b-972bf7fa3987 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.807858912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a19cbf6-cdc1-45f7-b17b-972bf7fa3987 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.808244201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:740199b9cf8f7d44945f1595c6bc95b9d4a5f506de620b60b669d7350ee8c288,PodSandboxId:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015114992781418,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fcbb4a0c9f819570f35be774064d1789878d712f59584bf47095c8a2b8b9c2,PodSandboxId:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015114987863816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c9e66da51164aa6a44f8e0dffdbb1958d7eeecbe69c9855b5c62e85b5ca7731,PodSandboxId:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015114976853648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ca0d560be7008050b1fbb40a47846e30d12910c212d181e8a1107fe9038e8da,PodSandboxId:d56131f182017e3dab1fe7c6e6b952904badf8b486d3b2374fa2641261d8217a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015101873557801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c7e7e82971fb76354fb48da635c71754db693ee4946a616c23668e9da54086,PodSandboxId:7ba8d99af7074e6ee7a539c38638d2b878cfc1e624e622e71a2cb26222aa91d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015102500344006,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942a33b53c14884a730c0d9372ec5780215af6816181a9f78c41f6daceb8793c,PodSandboxId:5df0266451547af38f201fe742368c81b995786205cfac73d19907087b499931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015101752070190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed681309c428fa41f72fe46aa98190a1,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d25766843c897790bce723482daabdb70fdb342b72df7f64ee2248db1c95117,PodSandboxId:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728015101643719572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880e75697720a668948a7e23ff0534f175e214479c40dc90fa96079410f83512,PodSandboxId:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728015101521780303,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aff26b1ec7bb1ec30ebf45e6a1956367f821edea625ad7dcc9b2471118c1503,PodSandboxId:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728015101595742138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36279bf5603bc0e718742b055d454240e5aac0045b783d2c0bda216288633d35,PodSandboxId:f76c8e108a6af8e4d582adf7aabe6b07bc2c05ac2af846f7efa2b9f0b302a090,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728015065696616512,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7934b01963c4be725cd65138a3481e7a73e7902fe8f0e73707284f44de4f47ef,PodSandboxId:7440bcfcd066e374a6b056922273048c27bc180c0671eebde598a674df63252b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728015065304955485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca00a9c6d44e15b6fd112c388a715d1ba7ec29eb6d9bf7442109db7344a9a29,PodSandboxId:857ce4225431f5a464d525007e5b848455dc4abd9def63613b93a869e212ca91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728015054457422007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ed681309c428fa41f72fe46aa98190a1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a19cbf6-cdc1-45f7-b17b-972bf7fa3987 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.862263891Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=179bd9c4-8eb1-429d-9cd6-d6cdcdde5009 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.862367639Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=179bd9c4-8eb1-429d-9cd6-d6cdcdde5009 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.864809289Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1d8aee3c-3057-4dcf-ace3-50001b51faad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.865355579Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015137865317610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1d8aee3c-3057-4dcf-ace3-50001b51faad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.866088286Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ec6a936-d73c-4a81-bf9f-07a2398e1aa6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.866181546Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ec6a936-d73c-4a81-bf9f-07a2398e1aa6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:12:17 pause-353264 crio[2077]: time="2024-10-04 04:12:17.866533651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:740199b9cf8f7d44945f1595c6bc95b9d4a5f506de620b60b669d7350ee8c288,PodSandboxId:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015114992781418,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fcbb4a0c9f819570f35be774064d1789878d712f59584bf47095c8a2b8b9c2,PodSandboxId:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015114987863816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c9e66da51164aa6a44f8e0dffdbb1958d7eeecbe69c9855b5c62e85b5ca7731,PodSandboxId:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015114976853648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ca0d560be7008050b1fbb40a47846e30d12910c212d181e8a1107fe9038e8da,PodSandboxId:d56131f182017e3dab1fe7c6e6b952904badf8b486d3b2374fa2641261d8217a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015101873557801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c7e7e82971fb76354fb48da635c71754db693ee4946a616c23668e9da54086,PodSandboxId:7ba8d99af7074e6ee7a539c38638d2b878cfc1e624e622e71a2cb26222aa91d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015102500344006,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942a33b53c14884a730c0d9372ec5780215af6816181a9f78c41f6daceb8793c,PodSandboxId:5df0266451547af38f201fe742368c81b995786205cfac73d19907087b499931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015101752070190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed681309c428fa41f72fe46aa98190a1,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d25766843c897790bce723482daabdb70fdb342b72df7f64ee2248db1c95117,PodSandboxId:e0d062061752e248c10c012e7ae67b87dd411c4e36e2a280bf9697af645f71bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728015101643719572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82f143f5852094f019e251a252b8a0a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:880e75697720a668948a7e23ff0534f175e214479c40dc90fa96079410f83512,PodSandboxId:e7b09bd196a35d8e062adb65335fc7202df9e4a554cfdd28485acc0dfdbcdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728015101521780303,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac1c284d50647f044dc4b553a259a6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aff26b1ec7bb1ec30ebf45e6a1956367f821edea625ad7dcc9b2471118c1503,PodSandboxId:9f247ffbb3b9f2f545982f073f9866a114dd070e804961ec35e2237470449a00,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728015101595742138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49125844997c970c99bbbb0a5faf8cdd,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36279bf5603bc0e718742b055d454240e5aac0045b783d2c0bda216288633d35,PodSandboxId:f76c8e108a6af8e4d582adf7aabe6b07bc2c05ac2af846f7efa2b9f0b302a090,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728015065696616512,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gttvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8c2f6d-324c-4baa-b6cb-dfdd579bf55c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7934b01963c4be725cd65138a3481e7a73e7902fe8f0e73707284f44de4f47ef,PodSandboxId:7440bcfcd066e374a6b056922273048c27bc180c0671eebde598a674df63252b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728015065304955485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tthhg,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5b374015-3f42-42d6-8357-e27efe1a939a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca00a9c6d44e15b6fd112c388a715d1ba7ec29eb6d9bf7442109db7344a9a29,PodSandboxId:857ce4225431f5a464d525007e5b848455dc4abd9def63613b93a869e212ca91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728015054457422007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-353264,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ed681309c428fa41f72fe46aa98190a1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ec6a936-d73c-4a81-bf9f-07a2398e1aa6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	740199b9cf8f7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   22 seconds ago       Running             kube-controller-manager   2                   9f247ffbb3b9f       kube-controller-manager-pause-353264
	46fcbb4a0c9f8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   22 seconds ago       Running             etcd                      2                   e0d062061752e       etcd-pause-353264
	9c9e66da51164       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   22 seconds ago       Running             kube-apiserver            2                   e7b09bd196a35       kube-apiserver-pause-353264
	61c7e7e82971f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   35 seconds ago       Running             coredns                   1                   7ba8d99af7074       coredns-7c65d6cfc9-gttvn
	0ca0d560be700       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   36 seconds ago       Running             kube-proxy                1                   d56131f182017       kube-proxy-tthhg
	942a33b53c148       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   36 seconds ago       Running             kube-scheduler            1                   5df0266451547       kube-scheduler-pause-353264
	7d25766843c89       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   36 seconds ago       Exited              etcd                      1                   e0d062061752e       etcd-pause-353264
	9aff26b1ec7bb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   36 seconds ago       Exited              kube-controller-manager   1                   9f247ffbb3b9f       kube-controller-manager-pause-353264
	880e75697720a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   36 seconds ago       Exited              kube-apiserver            1                   e7b09bd196a35       kube-apiserver-pause-353264
	36279bf5603bc       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   f76c8e108a6af       coredns-7c65d6cfc9-gttvn
	7934b01963c4b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Exited              kube-proxy                0                   7440bcfcd066e       kube-proxy-tthhg
	dca00a9c6d44e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Exited              kube-scheduler            0                   857ce4225431f       kube-scheduler-pause-353264
	
	
	==> coredns [36279bf5603bc0e718742b055d454240e5aac0045b783d2c0bda216288633d35] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59030 - 13730 "HINFO IN 5594241315810035337.6325755141832387909. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025692232s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[1899728852]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 04:11:05.949) (total time: 21621ms):
	Trace[1899728852]: [21.621358383s] [21.621358383s] END
	[INFO] plugin/kubernetes: Trace[1772467420]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 04:11:05.950) (total time: 21620ms):
	Trace[1772467420]: [21.620474782s] [21.620474782s] END
	[INFO] plugin/kubernetes: Trace[1812294083]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 04:11:05.948) (total time: 21622ms):
	Trace[1812294083]: [21.622746131s] [21.622746131s] END
	
	
	==> coredns [61c7e7e82971fb76354fb48da635c71754db693ee4946a616c23668e9da54086] <==
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41253 - 4363 "HINFO IN 4737419583035021761.7604275016666083342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014492725s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[829427277]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 04:11:42.717) (total time: 10000ms):
	Trace[829427277]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (04:11:52.717)
	Trace[829427277]: [10.000924048s] [10.000924048s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[428170624]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 04:11:42.716) (total time: 10001ms):
	Trace[428170624]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (04:11:52.717)
	Trace[428170624]: [10.001292697s] [10.001292697s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1549162769]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 04:11:42.716) (total time: 10002ms):
	Trace[1549162769]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (04:11:52.718)
	Trace[1549162769]: [10.00262831s] [10.00262831s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               pause-353264
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-353264
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=pause-353264
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T04_11_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 04:10:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-353264
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 04:12:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 04:11:58 +0000   Fri, 04 Oct 2024 04:10:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 04:11:58 +0000   Fri, 04 Oct 2024 04:10:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 04:11:58 +0000   Fri, 04 Oct 2024 04:10:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 04:11:58 +0000   Fri, 04 Oct 2024 04:11:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    pause-353264
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 963a32b70e7a47cbb46f88834fbc654b
	  System UUID:                963a32b7-0e7a-47cb-b46f-88834fbc654b
	  Boot ID:                    164b43a4-1648-4106-8485-8951b89d8fac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-gttvn                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     73s
	  kube-system                 etcd-pause-353264                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         78s
	  kube-system                 kube-apiserver-pause-353264             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-pause-353264    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-tthhg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-pause-353264             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 72s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  Starting                 79s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     78s                kubelet          Node pause-353264 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  78s                kubelet          Node pause-353264 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s                kubelet          Node pause-353264 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                78s                kubelet          Node pause-353264 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           75s                node-controller  Node pause-353264 event: Registered Node pause-353264 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-353264 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-353264 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-353264 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node pause-353264 event: Registered Node pause-353264 in Controller
	
	
	==> dmesg <==
	[ +10.384258] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.084087] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067692] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.223241] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.127671] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.316092] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +4.419986] systemd-fstab-generator[749]: Ignoring "noauto" option for root device
	[  +0.064653] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.590181] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +1.417171] kauditd_printk_skb: 87 callbacks suppressed
	[  +5.151674] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[Oct 4 04:11] systemd-fstab-generator[1341]: Ignoring "noauto" option for root device
	[  +1.103022] kauditd_printk_skb: 43 callbacks suppressed
	[ +28.465485] systemd-fstab-generator[2002]: Ignoring "noauto" option for root device
	[  +0.072416] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.074168] systemd-fstab-generator[2014]: Ignoring "noauto" option for root device
	[  +0.198460] systemd-fstab-generator[2028]: Ignoring "noauto" option for root device
	[  +0.143302] systemd-fstab-generator[2040]: Ignoring "noauto" option for root device
	[  +0.337004] systemd-fstab-generator[2069]: Ignoring "noauto" option for root device
	[  +6.133697] systemd-fstab-generator[2186]: Ignoring "noauto" option for root device
	[  +0.072910] kauditd_printk_skb: 100 callbacks suppressed
	[ +13.447622] systemd-fstab-generator[2929]: Ignoring "noauto" option for root device
	[  +0.086361] kauditd_printk_skb: 87 callbacks suppressed
	[Oct 4 04:12] kauditd_printk_skb: 33 callbacks suppressed
	[ +10.040317] systemd-fstab-generator[3246]: Ignoring "noauto" option for root device
	
	
	==> etcd [46fcbb4a0c9f819570f35be774064d1789878d712f59584bf47095c8a2b8b9c2] <==
	{"level":"info","ts":"2024-10-04T04:11:55.328115Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b5cacf25c2f2940e","local-member-id":"903e0dada8362847","added-peer-id":"903e0dada8362847","added-peer-peer-urls":["https://192.168.39.41:2380"]}
	{"level":"info","ts":"2024-10-04T04:11:55.328233Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b5cacf25c2f2940e","local-member-id":"903e0dada8362847","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T04:11:55.328276Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T04:11:55.325415Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:11:55.341333Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-04T04:11:55.341648Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"903e0dada8362847","initial-advertise-peer-urls":["https://192.168.39.41:2380"],"listen-peer-urls":["https://192.168.39.41:2380"],"advertise-client-urls":["https://192.168.39.41:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.41:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-04T04:11:55.341703Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-04T04:11:55.341820Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2024-10-04T04:11:55.341847Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2024-10-04T04:11:56.391283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-04T04:11:56.391344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-04T04:11:56.391375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 received MsgPreVoteResp from 903e0dada8362847 at term 2"}
	{"level":"info","ts":"2024-10-04T04:11:56.391388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became candidate at term 3"}
	{"level":"info","ts":"2024-10-04T04:11:56.391394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 received MsgVoteResp from 903e0dada8362847 at term 3"}
	{"level":"info","ts":"2024-10-04T04:11:56.391402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became leader at term 3"}
	{"level":"info","ts":"2024-10-04T04:11:56.391409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 903e0dada8362847 elected leader 903e0dada8362847 at term 3"}
	{"level":"info","ts":"2024-10-04T04:11:56.396406Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"903e0dada8362847","local-member-attributes":"{Name:pause-353264 ClientURLs:[https://192.168.39.41:2379]}","request-path":"/0/members/903e0dada8362847/attributes","cluster-id":"b5cacf25c2f2940e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T04:11:56.396423Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T04:11:56.396442Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T04:11:56.397120Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T04:11:56.397179Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T04:11:56.397820Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:11:56.397857Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:11:56.398717Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T04:11:56.398838Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.41:2379"}
	
	
	==> etcd [7d25766843c897790bce723482daabdb70fdb342b72df7f64ee2248db1c95117] <==
	{"level":"warn","ts":"2024-10-04T04:11:42.150185Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-10-04T04:11:42.150416Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.41:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.39.41:2380","--initial-cluster=pause-353264=https://192.168.39.41:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.41:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.41:2380","--name=pause-353264","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-c
a-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-10-04T04:11:42.154615Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-10-04T04:11:42.155501Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-10-04T04:11:42.155547Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.41:2380"]}
	{"level":"info","ts":"2024-10-04T04:11:42.155610Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-04T04:11:42.157384Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.41:2379"]}
	{"level":"info","ts":"2024-10-04T04:11:42.157613Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-353264","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.41:2380"],"listen-peer-urls":["https://192.168.39.41:2380"],"advertise-client-urls":["https://192.168.39.41:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.41:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluste
r-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-10-04T04:11:42.268545Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"110.704744ms"}
	{"level":"info","ts":"2024-10-04T04:11:42.328969Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-10-04T04:11:42.389188Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"b5cacf25c2f2940e","local-member-id":"903e0dada8362847","commit-index":392}
	{"level":"info","ts":"2024-10-04T04:11:42.389287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 switched to configuration voters=()"}
	{"level":"info","ts":"2024-10-04T04:11:42.389363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became follower at term 2"}
	{"level":"info","ts":"2024-10-04T04:11:42.389380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 903e0dada8362847 [peers: [], term: 2, commit: 392, applied: 0, lastindex: 392, lastterm: 2]"}
	{"level":"warn","ts":"2024-10-04T04:11:42.401246Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	
	
	==> kernel <==
	 04:12:18 up 1 min,  0 users,  load average: 1.30, 0.46, 0.16
	Linux pause-353264 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [880e75697720a668948a7e23ff0534f175e214479c40dc90fa96079410f83512] <==
	I1004 04:11:42.102754       1 options.go:228] external host was not specified, using 192.168.39.41
	I1004 04:11:42.140265       1 server.go:142] Version: v1.31.1
	I1004 04:11:42.140306       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1004 04:11:42.992153       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:42.992351       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1004 04:11:42.992418       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1004 04:11:42.999283       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1004 04:11:42.999373       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1004 04:11:42.999544       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1004 04:11:42.999763       1 instance.go:232] Using reconciler: lease
	W1004 04:11:43.000760       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:43.993582       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:43.993674       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:44.001401       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:45.389316       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:45.485705       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:45.889356       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:47.833949       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:48.017378       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:48.715033       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:52.122000       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:11:52.419948       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9c9e66da51164aa6a44f8e0dffdbb1958d7eeecbe69c9855b5c62e85b5ca7731] <==
	I1004 04:11:57.950758       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1004 04:11:57.951096       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1004 04:11:57.951173       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1004 04:11:57.951180       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1004 04:11:57.951277       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1004 04:11:57.951366       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1004 04:11:57.973635       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1004 04:11:57.973739       1 aggregator.go:171] initial CRD sync complete...
	I1004 04:11:57.973764       1 autoregister_controller.go:144] Starting autoregister controller
	I1004 04:11:57.973819       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 04:11:57.973849       1 cache.go:39] Caches are synced for autoregister controller
	I1004 04:11:57.993782       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1004 04:11:57.994133       1 shared_informer.go:320] Caches are synced for configmaps
	E1004 04:11:58.007234       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1004 04:11:58.015549       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1004 04:11:58.015656       1 policy_source.go:224] refreshing policies
	I1004 04:11:58.032873       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 04:11:58.798595       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 04:11:59.639383       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 04:11:59.657817       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 04:11:59.699885       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 04:11:59.740313       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 04:11:59.749593       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 04:12:01.355580       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 04:12:01.661834       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [740199b9cf8f7d44945f1595c6bc95b9d4a5f506de620b60b669d7350ee8c288] <==
	I1004 04:12:01.257837       1 shared_informer.go:320] Caches are synced for namespace
	I1004 04:12:01.262549       1 shared_informer.go:320] Caches are synced for service account
	I1004 04:12:01.266543       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1004 04:12:01.301964       1 shared_informer.go:320] Caches are synced for disruption
	I1004 04:12:01.302076       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1004 04:12:01.302256       1 shared_informer.go:320] Caches are synced for PVC protection
	I1004 04:12:01.302265       1 shared_informer.go:320] Caches are synced for GC
	I1004 04:12:01.302274       1 shared_informer.go:320] Caches are synced for deployment
	I1004 04:12:01.303331       1 shared_informer.go:320] Caches are synced for cronjob
	I1004 04:12:01.334129       1 shared_informer.go:320] Caches are synced for endpoint
	I1004 04:12:01.400078       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1004 04:12:01.409916       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1004 04:12:01.429060       1 shared_informer.go:320] Caches are synced for daemon sets
	I1004 04:12:01.458817       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 04:12:01.495698       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 04:12:01.501761       1 shared_informer.go:320] Caches are synced for stateful set
	I1004 04:12:01.565861       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="299.20903ms"
	I1004 04:12:01.565964       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="51.56µs"
	I1004 04:12:01.909447       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 04:12:01.921774       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 04:12:01.921831       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1004 04:12:07.138865       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="20.079359ms"
	I1004 04:12:07.138973       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="47.513µs"
	I1004 04:12:07.166867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="15.758775ms"
	I1004 04:12:07.167545       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="278.295µs"
	
	
	==> kube-controller-manager [9aff26b1ec7bb1ec30ebf45e6a1956367f821edea625ad7dcc9b2471118c1503] <==
	
	
	==> kube-proxy [0ca0d560be7008050b1fbb40a47846e30d12910c212d181e8a1107fe9038e8da] <==
	 >
	E1004 04:11:43.058722       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 04:11:53.850106       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-353264\": dial tcp 192.168.39.41:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.41:39838->192.168.39.41:8443: read: connection reset by peer"
	E1004 04:11:54.964970       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-353264\": dial tcp 192.168.39.41:8443: connect: connection refused"
	I1004 04:11:57.960393       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.41"]
	E1004 04:11:57.960672       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 04:11:58.006387       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 04:11:58.006518       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 04:11:58.006551       1 server_linux.go:169] "Using iptables Proxier"
	I1004 04:11:58.010828       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 04:11:58.011121       1 server.go:483] "Version info" version="v1.31.1"
	I1004 04:11:58.011150       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 04:11:58.013207       1 config.go:199] "Starting service config controller"
	I1004 04:11:58.013264       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 04:11:58.013293       1 config.go:105] "Starting endpoint slice config controller"
	I1004 04:11:58.013297       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 04:11:58.014419       1 config.go:328] "Starting node config controller"
	I1004 04:11:58.014509       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 04:11:58.114084       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 04:11:58.114202       1 shared_informer.go:320] Caches are synced for service config
	I1004 04:11:58.114844       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7934b01963c4be725cd65138a3481e7a73e7902fe8f0e73707284f44de4f47ef] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 04:11:05.670069       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 04:11:05.695878       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.41"]
	E1004 04:11:05.696135       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 04:11:05.750402       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 04:11:05.750651       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 04:11:05.750730       1 server_linux.go:169] "Using iptables Proxier"
	I1004 04:11:05.758677       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 04:11:05.759793       1 server.go:483] "Version info" version="v1.31.1"
	I1004 04:11:05.759827       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 04:11:05.763934       1 config.go:199] "Starting service config controller"
	I1004 04:11:05.764397       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 04:11:05.764681       1 config.go:105] "Starting endpoint slice config controller"
	I1004 04:11:05.764708       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 04:11:05.766768       1 config.go:328] "Starting node config controller"
	I1004 04:11:05.766794       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 04:11:05.865213       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 04:11:05.865320       1 shared_informer.go:320] Caches are synced for service config
	I1004 04:11:05.868420       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [942a33b53c14884a730c0d9372ec5780215af6816181a9f78c41f6daceb8793c] <==
	W1004 04:11:55.238598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.41:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E1004 04:11:55.238661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.41:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.41:8443: connect: connection refused" logger="UnhandledError"
	W1004 04:11:57.881949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 04:11:57.882016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.882109       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 04:11:57.882138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.882238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 04:11:57.882296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.882908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 04:11:57.882989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.883161       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 04:11:57.883821       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.884157       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 04:11:57.885086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.886698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 04:11:57.887574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.887850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 04:11:57.889551       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.888636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 04:11:57.889670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.888647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 04:11:57.889731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:11:57.888918       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 04:11:57.889783       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1004 04:12:00.754569       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [dca00a9c6d44e15b6fd112c388a715d1ba7ec29eb6d9bf7442109db7344a9a29] <==
	E1004 04:10:58.186347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.235774       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 04:10:58.235825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.252424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 04:10:58.252532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.282552       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 04:10:58.282613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.283837       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 04:10:58.283900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.284106       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 04:10:58.284152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.376058       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 04:10:58.376113       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.445078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 04:10:58.445151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.488750       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 04:10:58.488810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.568123       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 04:10:58.568182       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1004 04:10:58.609348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 04:10:58.609417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:10:58.683733       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 04:10:58.684258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1004 04:11:01.202541       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1004 04:11:27.571056       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 04 04:11:54 pause-353264 kubelet[2936]: E1004 04:11:54.656339    2936 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-353264?timeout=10s\": dial tcp 192.168.39.41:8443: connect: connection refused" interval="400ms"
	Oct 04 04:11:54 pause-353264 kubelet[2936]: I1004 04:11:54.757364    2936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ac1c284d50647f044dc4b553a259a6e-ca-certs\") pod \"kube-apiserver-pause-353264\" (UID: \"2ac1c284d50647f044dc4b553a259a6e\") " pod="kube-system/kube-apiserver-pause-353264"
	Oct 04 04:11:54 pause-353264 kubelet[2936]: I1004 04:11:54.757426    2936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ac1c284d50647f044dc4b553a259a6e-k8s-certs\") pod \"kube-apiserver-pause-353264\" (UID: \"2ac1c284d50647f044dc4b553a259a6e\") " pod="kube-system/kube-apiserver-pause-353264"
	Oct 04 04:11:54 pause-353264 kubelet[2936]: I1004 04:11:54.757551    2936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ac1c284d50647f044dc4b553a259a6e-usr-share-ca-certificates\") pod \"kube-apiserver-pause-353264\" (UID: \"2ac1c284d50647f044dc4b553a259a6e\") " pod="kube-system/kube-apiserver-pause-353264"
	Oct 04 04:11:54 pause-353264 kubelet[2936]: I1004 04:11:54.831683    2936 kubelet_node_status.go:72] "Attempting to register node" node="pause-353264"
	Oct 04 04:11:54 pause-353264 kubelet[2936]: E1004 04:11:54.832668    2936 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.41:8443: connect: connection refused" node="pause-353264"
	Oct 04 04:11:54 pause-353264 kubelet[2936]: I1004 04:11:54.952216    2936 scope.go:117] "RemoveContainer" containerID="7d25766843c897790bce723482daabdb70fdb342b72df7f64ee2248db1c95117"
	Oct 04 04:11:54 pause-353264 kubelet[2936]: I1004 04:11:54.952876    2936 scope.go:117] "RemoveContainer" containerID="9aff26b1ec7bb1ec30ebf45e6a1956367f821edea625ad7dcc9b2471118c1503"
	Oct 04 04:11:54 pause-353264 kubelet[2936]: I1004 04:11:54.952966    2936 scope.go:117] "RemoveContainer" containerID="880e75697720a668948a7e23ff0534f175e214479c40dc90fa96079410f83512"
	Oct 04 04:11:55 pause-353264 kubelet[2936]: E1004 04:11:55.058415    2936 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-353264?timeout=10s\": dial tcp 192.168.39.41:8443: connect: connection refused" interval="800ms"
	Oct 04 04:11:55 pause-353264 kubelet[2936]: I1004 04:11:55.234849    2936 kubelet_node_status.go:72] "Attempting to register node" node="pause-353264"
	Oct 04 04:11:55 pause-353264 kubelet[2936]: E1004 04:11:55.235813    2936 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.41:8443: connect: connection refused" node="pause-353264"
	Oct 04 04:11:56 pause-353264 kubelet[2936]: I1004 04:11:56.038109    2936 kubelet_node_status.go:72] "Attempting to register node" node="pause-353264"
	Oct 04 04:11:58 pause-353264 kubelet[2936]: I1004 04:11:58.046145    2936 kubelet_node_status.go:111] "Node was previously registered" node="pause-353264"
	Oct 04 04:11:58 pause-353264 kubelet[2936]: I1004 04:11:58.046261    2936 kubelet_node_status.go:75] "Successfully registered node" node="pause-353264"
	Oct 04 04:11:58 pause-353264 kubelet[2936]: I1004 04:11:58.046293    2936 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 04 04:11:58 pause-353264 kubelet[2936]: I1004 04:11:58.047146    2936 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 04 04:11:58 pause-353264 kubelet[2936]: I1004 04:11:58.428698    2936 apiserver.go:52] "Watching apiserver"
	Oct 04 04:11:58 pause-353264 kubelet[2936]: I1004 04:11:58.451951    2936 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 04 04:11:58 pause-353264 kubelet[2936]: I1004 04:11:58.499189    2936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b374015-3f42-42d6-8357-e27efe1a939a-xtables-lock\") pod \"kube-proxy-tthhg\" (UID: \"5b374015-3f42-42d6-8357-e27efe1a939a\") " pod="kube-system/kube-proxy-tthhg"
	Oct 04 04:11:58 pause-353264 kubelet[2936]: I1004 04:11:58.499347    2936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b374015-3f42-42d6-8357-e27efe1a939a-lib-modules\") pod \"kube-proxy-tthhg\" (UID: \"5b374015-3f42-42d6-8357-e27efe1a939a\") " pod="kube-system/kube-proxy-tthhg"
	Oct 04 04:12:04 pause-353264 kubelet[2936]: E1004 04:12:04.535605    2936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015124535088018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:12:04 pause-353264 kubelet[2936]: E1004 04:12:04.535657    2936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015124535088018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:12:14 pause-353264 kubelet[2936]: E1004 04:12:14.537264    2936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015134536879226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:12:14 pause-353264 kubelet[2936]: E1004 04:12:14.537289    2936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728015134536879226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-353264 -n pause-353264
helpers_test.go:261: (dbg) Run:  kubectl --context pause-353264 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (65.76s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (289.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-420062 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-420062 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m49.445782772s)

                                                
                                                
-- stdout --
	* [old-k8s-version-420062] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-420062" primary control-plane node in "old-k8s-version-420062" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 04:14:04.109580   61939 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:14:04.109905   61939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:14:04.109918   61939 out.go:358] Setting ErrFile to fd 2...
	I1004 04:14:04.109925   61939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:14:04.110203   61939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:14:04.110967   61939 out.go:352] Setting JSON to false
	I1004 04:14:04.112279   61939 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6989,"bootTime":1728008255,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 04:14:04.112361   61939 start.go:139] virtualization: kvm guest
	I1004 04:14:04.131897   61939 out.go:177] * [old-k8s-version-420062] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 04:14:04.133486   61939 notify.go:220] Checking for updates...
	I1004 04:14:04.133538   61939 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 04:14:04.210406   61939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 04:14:04.256083   61939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:14:04.278691   61939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:14:04.353154   61939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 04:14:04.354619   61939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 04:14:04.356639   61939 config.go:182] Loaded profile config "NoKubernetes-316059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1004 04:14:04.356796   61939 config.go:182] Loaded profile config "kubernetes-upgrade-326061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:14:04.356929   61939 config.go:182] Loaded profile config "running-upgrade-552490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1004 04:14:04.357041   61939 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 04:14:04.509043   61939 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 04:14:04.511095   61939 start.go:297] selected driver: kvm2
	I1004 04:14:04.511116   61939 start.go:901] validating driver "kvm2" against <nil>
	I1004 04:14:04.511133   61939 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 04:14:04.512270   61939 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:14:04.512397   61939 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 04:14:04.534810   61939 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 04:14:04.534889   61939 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 04:14:04.535223   61939 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:14:04.535263   61939 cni.go:84] Creating CNI manager for ""
	I1004 04:14:04.535320   61939 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:14:04.535337   61939 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 04:14:04.535399   61939 start.go:340] cluster config:
	{Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:14:04.535535   61939 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:14:04.589995   61939 out.go:177] * Starting "old-k8s-version-420062" primary control-plane node in "old-k8s-version-420062" cluster
	I1004 04:14:04.592419   61939 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1004 04:14:04.592471   61939 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1004 04:14:04.592482   61939 cache.go:56] Caching tarball of preloaded images
	I1004 04:14:04.592601   61939 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 04:14:04.592616   61939 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1004 04:14:04.592741   61939 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/config.json ...
	I1004 04:14:04.592765   61939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/config.json: {Name:mk3c5c715752f46e233f3b6b43dc4649d2615d23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:14:04.592920   61939 start.go:360] acquireMachinesLock for old-k8s-version-420062: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:14:21.437231   61939 start.go:364] duration metric: took 16.844268763s to acquireMachinesLock for "old-k8s-version-420062"
	I1004 04:14:21.437299   61939 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:14:21.437433   61939 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 04:14:21.438984   61939 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 04:14:21.439169   61939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:14:21.439224   61939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:14:21.459276   61939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I1004 04:14:21.459773   61939 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:14:21.460405   61939 main.go:141] libmachine: Using API Version  1
	I1004 04:14:21.460430   61939 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:14:21.460898   61939 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:14:21.461086   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:14:21.461278   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:14:21.461432   61939 start.go:159] libmachine.API.Create for "old-k8s-version-420062" (driver="kvm2")
	I1004 04:14:21.461461   61939 client.go:168] LocalClient.Create starting
	I1004 04:14:21.461496   61939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 04:14:21.461550   61939 main.go:141] libmachine: Decoding PEM data...
	I1004 04:14:21.461573   61939 main.go:141] libmachine: Parsing certificate...
	I1004 04:14:21.461639   61939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 04:14:21.461663   61939 main.go:141] libmachine: Decoding PEM data...
	I1004 04:14:21.461680   61939 main.go:141] libmachine: Parsing certificate...
	I1004 04:14:21.461703   61939 main.go:141] libmachine: Running pre-create checks...
	I1004 04:14:21.461720   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .PreCreateCheck
	I1004 04:14:21.462133   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetConfigRaw
	I1004 04:14:21.462580   61939 main.go:141] libmachine: Creating machine...
	I1004 04:14:21.462596   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .Create
	I1004 04:14:21.462705   61939 main.go:141] libmachine: (old-k8s-version-420062) Creating KVM machine...
	I1004 04:14:21.464112   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found existing default KVM network
	I1004 04:14:21.465233   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:21.465040   62329 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:48:9f:5f} reservation:<nil>}
	I1004 04:14:21.466327   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:21.466196   62329 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123aa0}
	I1004 04:14:21.466351   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | created network xml: 
	I1004 04:14:21.466365   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | <network>
	I1004 04:14:21.466374   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG |   <name>mk-old-k8s-version-420062</name>
	I1004 04:14:21.466388   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG |   <dns enable='no'/>
	I1004 04:14:21.466397   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG |   
	I1004 04:14:21.466407   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1004 04:14:21.466418   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG |     <dhcp>
	I1004 04:14:21.466427   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1004 04:14:21.466445   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG |     </dhcp>
	I1004 04:14:21.466456   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG |   </ip>
	I1004 04:14:21.466463   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG |   
	I1004 04:14:21.466478   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | </network>
	I1004 04:14:21.466493   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | 
	I1004 04:14:21.472162   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | trying to create private KVM network mk-old-k8s-version-420062 192.168.50.0/24...
	I1004 04:14:21.548388   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | private KVM network mk-old-k8s-version-420062 192.168.50.0/24 created
	I1004 04:14:21.548422   61939 main.go:141] libmachine: (old-k8s-version-420062) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062 ...
	I1004 04:14:21.548443   61939 main.go:141] libmachine: (old-k8s-version-420062) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 04:14:21.548456   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:21.548358   62329 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:14:21.548546   61939 main.go:141] libmachine: (old-k8s-version-420062) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 04:14:21.795585   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:21.795432   62329 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa...
	I1004 04:14:22.067768   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:22.067623   62329 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/old-k8s-version-420062.rawdisk...
	I1004 04:14:22.067800   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | Writing magic tar header
	I1004 04:14:22.067851   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | Writing SSH key tar header
	I1004 04:14:22.067904   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:22.067737   62329 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062 ...
	I1004 04:14:22.067918   61939 main.go:141] libmachine: (old-k8s-version-420062) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062 (perms=drwx------)
	I1004 04:14:22.067930   61939 main.go:141] libmachine: (old-k8s-version-420062) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 04:14:22.067937   61939 main.go:141] libmachine: (old-k8s-version-420062) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 04:14:22.067948   61939 main.go:141] libmachine: (old-k8s-version-420062) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 04:14:22.067955   61939 main.go:141] libmachine: (old-k8s-version-420062) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 04:14:22.067968   61939 main.go:141] libmachine: (old-k8s-version-420062) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 04:14:22.067983   61939 main.go:141] libmachine: (old-k8s-version-420062) Creating domain...
	I1004 04:14:22.067999   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062
	I1004 04:14:22.068014   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 04:14:22.068023   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:14:22.068031   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 04:14:22.068040   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 04:14:22.068046   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | Checking permissions on dir: /home/jenkins
	I1004 04:14:22.068053   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | Checking permissions on dir: /home
	I1004 04:14:22.068064   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | Skipping /home - not owner
	I1004 04:14:22.069135   61939 main.go:141] libmachine: (old-k8s-version-420062) define libvirt domain using xml: 
	I1004 04:14:22.069154   61939 main.go:141] libmachine: (old-k8s-version-420062) <domain type='kvm'>
	I1004 04:14:22.069161   61939 main.go:141] libmachine: (old-k8s-version-420062)   <name>old-k8s-version-420062</name>
	I1004 04:14:22.069169   61939 main.go:141] libmachine: (old-k8s-version-420062)   <memory unit='MiB'>2200</memory>
	I1004 04:14:22.069175   61939 main.go:141] libmachine: (old-k8s-version-420062)   <vcpu>2</vcpu>
	I1004 04:14:22.069179   61939 main.go:141] libmachine: (old-k8s-version-420062)   <features>
	I1004 04:14:22.069184   61939 main.go:141] libmachine: (old-k8s-version-420062)     <acpi/>
	I1004 04:14:22.069192   61939 main.go:141] libmachine: (old-k8s-version-420062)     <apic/>
	I1004 04:14:22.069207   61939 main.go:141] libmachine: (old-k8s-version-420062)     <pae/>
	I1004 04:14:22.069219   61939 main.go:141] libmachine: (old-k8s-version-420062)     
	I1004 04:14:22.069228   61939 main.go:141] libmachine: (old-k8s-version-420062)   </features>
	I1004 04:14:22.069239   61939 main.go:141] libmachine: (old-k8s-version-420062)   <cpu mode='host-passthrough'>
	I1004 04:14:22.069246   61939 main.go:141] libmachine: (old-k8s-version-420062)   
	I1004 04:14:22.069259   61939 main.go:141] libmachine: (old-k8s-version-420062)   </cpu>
	I1004 04:14:22.069297   61939 main.go:141] libmachine: (old-k8s-version-420062)   <os>
	I1004 04:14:22.069320   61939 main.go:141] libmachine: (old-k8s-version-420062)     <type>hvm</type>
	I1004 04:14:22.069337   61939 main.go:141] libmachine: (old-k8s-version-420062)     <boot dev='cdrom'/>
	I1004 04:14:22.069351   61939 main.go:141] libmachine: (old-k8s-version-420062)     <boot dev='hd'/>
	I1004 04:14:22.069363   61939 main.go:141] libmachine: (old-k8s-version-420062)     <bootmenu enable='no'/>
	I1004 04:14:22.069373   61939 main.go:141] libmachine: (old-k8s-version-420062)   </os>
	I1004 04:14:22.069381   61939 main.go:141] libmachine: (old-k8s-version-420062)   <devices>
	I1004 04:14:22.069390   61939 main.go:141] libmachine: (old-k8s-version-420062)     <disk type='file' device='cdrom'>
	I1004 04:14:22.069405   61939 main.go:141] libmachine: (old-k8s-version-420062)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/boot2docker.iso'/>
	I1004 04:14:22.069415   61939 main.go:141] libmachine: (old-k8s-version-420062)       <target dev='hdc' bus='scsi'/>
	I1004 04:14:22.069423   61939 main.go:141] libmachine: (old-k8s-version-420062)       <readonly/>
	I1004 04:14:22.069436   61939 main.go:141] libmachine: (old-k8s-version-420062)     </disk>
	I1004 04:14:22.069448   61939 main.go:141] libmachine: (old-k8s-version-420062)     <disk type='file' device='disk'>
	I1004 04:14:22.069461   61939 main.go:141] libmachine: (old-k8s-version-420062)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 04:14:22.069478   61939 main.go:141] libmachine: (old-k8s-version-420062)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/old-k8s-version-420062.rawdisk'/>
	I1004 04:14:22.069489   61939 main.go:141] libmachine: (old-k8s-version-420062)       <target dev='hda' bus='virtio'/>
	I1004 04:14:22.069499   61939 main.go:141] libmachine: (old-k8s-version-420062)     </disk>
	I1004 04:14:22.069510   61939 main.go:141] libmachine: (old-k8s-version-420062)     <interface type='network'>
	I1004 04:14:22.069533   61939 main.go:141] libmachine: (old-k8s-version-420062)       <source network='mk-old-k8s-version-420062'/>
	I1004 04:14:22.069552   61939 main.go:141] libmachine: (old-k8s-version-420062)       <model type='virtio'/>
	I1004 04:14:22.069566   61939 main.go:141] libmachine: (old-k8s-version-420062)     </interface>
	I1004 04:14:22.069576   61939 main.go:141] libmachine: (old-k8s-version-420062)     <interface type='network'>
	I1004 04:14:22.069588   61939 main.go:141] libmachine: (old-k8s-version-420062)       <source network='default'/>
	I1004 04:14:22.069599   61939 main.go:141] libmachine: (old-k8s-version-420062)       <model type='virtio'/>
	I1004 04:14:22.069608   61939 main.go:141] libmachine: (old-k8s-version-420062)     </interface>
	I1004 04:14:22.069618   61939 main.go:141] libmachine: (old-k8s-version-420062)     <serial type='pty'>
	I1004 04:14:22.069627   61939 main.go:141] libmachine: (old-k8s-version-420062)       <target port='0'/>
	I1004 04:14:22.069641   61939 main.go:141] libmachine: (old-k8s-version-420062)     </serial>
	I1004 04:14:22.069653   61939 main.go:141] libmachine: (old-k8s-version-420062)     <console type='pty'>
	I1004 04:14:22.069664   61939 main.go:141] libmachine: (old-k8s-version-420062)       <target type='serial' port='0'/>
	I1004 04:14:22.069675   61939 main.go:141] libmachine: (old-k8s-version-420062)     </console>
	I1004 04:14:22.069685   61939 main.go:141] libmachine: (old-k8s-version-420062)     <rng model='virtio'>
	I1004 04:14:22.069694   61939 main.go:141] libmachine: (old-k8s-version-420062)       <backend model='random'>/dev/random</backend>
	I1004 04:14:22.069713   61939 main.go:141] libmachine: (old-k8s-version-420062)     </rng>
	I1004 04:14:22.069723   61939 main.go:141] libmachine: (old-k8s-version-420062)     
	I1004 04:14:22.069732   61939 main.go:141] libmachine: (old-k8s-version-420062)     
	I1004 04:14:22.069741   61939 main.go:141] libmachine: (old-k8s-version-420062)   </devices>
	I1004 04:14:22.069758   61939 main.go:141] libmachine: (old-k8s-version-420062) </domain>
	I1004 04:14:22.069768   61939 main.go:141] libmachine: (old-k8s-version-420062) 
	I1004 04:14:22.074092   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:db:30:b2 in network default
	I1004 04:14:22.074807   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:22.074834   61939 main.go:141] libmachine: (old-k8s-version-420062) Ensuring networks are active...
	I1004 04:14:22.075568   61939 main.go:141] libmachine: (old-k8s-version-420062) Ensuring network default is active
	I1004 04:14:22.075971   61939 main.go:141] libmachine: (old-k8s-version-420062) Ensuring network mk-old-k8s-version-420062 is active
	I1004 04:14:22.076540   61939 main.go:141] libmachine: (old-k8s-version-420062) Getting domain xml...
	I1004 04:14:22.077336   61939 main.go:141] libmachine: (old-k8s-version-420062) Creating domain...
	I1004 04:14:23.353669   61939 main.go:141] libmachine: (old-k8s-version-420062) Waiting to get IP...
	I1004 04:14:23.354479   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:23.354895   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:14:23.354944   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:23.354891   62329 retry.go:31] will retry after 307.245769ms: waiting for machine to come up
	I1004 04:14:23.663455   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:23.664234   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:14:23.664262   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:23.664156   62329 retry.go:31] will retry after 299.7914ms: waiting for machine to come up
	I1004 04:14:23.965693   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:23.966142   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:14:23.966170   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:23.966072   62329 retry.go:31] will retry after 460.711165ms: waiting for machine to come up
	I1004 04:14:24.428791   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:24.429266   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:14:24.429293   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:24.429242   62329 retry.go:31] will retry after 535.889802ms: waiting for machine to come up
	I1004 04:14:24.967719   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:24.968282   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:14:24.968312   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:24.968221   62329 retry.go:31] will retry after 628.499286ms: waiting for machine to come up
	I1004 04:14:25.598105   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:25.598667   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:14:25.598698   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:25.598620   62329 retry.go:31] will retry after 680.128978ms: waiting for machine to come up
	I1004 04:14:26.280605   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:26.281244   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:14:26.281270   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:26.281168   62329 retry.go:31] will retry after 907.994464ms: waiting for machine to come up
	I1004 04:14:27.190496   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:27.190952   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:14:27.190982   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:27.190900   62329 retry.go:31] will retry after 1.101183499s: waiting for machine to come up
	I1004 04:14:28.294320   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:28.295115   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:14:28.295147   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:28.295031   62329 retry.go:31] will retry after 1.684693361s: waiting for machine to come up
	I1004 04:14:29.981912   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:29.982508   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:14:29.982533   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:29.982453   62329 retry.go:31] will retry after 2.219180802s: waiting for machine to come up
	I1004 04:14:32.203243   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:32.203819   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:14:32.203844   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:32.203768   62329 retry.go:31] will retry after 1.764058271s: waiting for machine to come up
	I1004 04:14:33.970808   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:33.971246   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:14:33.971277   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:33.971175   62329 retry.go:31] will retry after 2.470160047s: waiting for machine to come up
	I1004 04:14:36.444715   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:36.445313   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:14:36.445333   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:36.445233   62329 retry.go:31] will retry after 3.835440512s: waiting for machine to come up
	I1004 04:14:40.284921   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:40.285322   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:14:40.285355   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:14:40.285278   62329 retry.go:31] will retry after 4.712276786s: waiting for machine to come up
	I1004 04:14:45.001745   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.002268   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has current primary IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.002294   61939 main.go:141] libmachine: (old-k8s-version-420062) Found IP for machine: 192.168.50.146
	I1004 04:14:45.002318   61939 main.go:141] libmachine: (old-k8s-version-420062) Reserving static IP address...
	I1004 04:14:45.002717   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-420062", mac: "52:54:00:fb:e4:4f", ip: "192.168.50.146"} in network mk-old-k8s-version-420062
	I1004 04:14:45.090022   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | Getting to WaitForSSH function...
	I1004 04:14:45.090056   61939 main.go:141] libmachine: (old-k8s-version-420062) Reserved static IP address: 192.168.50.146
	I1004 04:14:45.090069   61939 main.go:141] libmachine: (old-k8s-version-420062) Waiting for SSH to be available...
	I1004 04:14:45.092899   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.093251   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:45.093280   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.093402   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using SSH client type: external
	I1004 04:14:45.093424   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa (-rw-------)
	I1004 04:14:45.093451   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:14:45.093478   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | About to run SSH command:
	I1004 04:14:45.093508   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | exit 0
	I1004 04:14:45.216349   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | SSH cmd err, output: <nil>: 
	I1004 04:14:45.216756   61939 main.go:141] libmachine: (old-k8s-version-420062) KVM machine creation complete!
	I1004 04:14:45.217034   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetConfigRaw
	I1004 04:14:45.217615   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:14:45.217836   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:14:45.218017   61939 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 04:14:45.218033   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetState
	I1004 04:14:45.219312   61939 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 04:14:45.219326   61939 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 04:14:45.219330   61939 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 04:14:45.219336   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:14:45.221366   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.221697   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:45.221755   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.221818   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:14:45.221984   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:14:45.222147   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:14:45.222248   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:14:45.222379   61939 main.go:141] libmachine: Using SSH client type: native
	I1004 04:14:45.222578   61939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:14:45.222589   61939 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 04:14:45.323320   61939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:14:45.323343   61939 main.go:141] libmachine: Detecting the provisioner...
	I1004 04:14:45.323352   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:14:45.326136   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.326512   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:45.326545   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.326663   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:14:45.326843   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:14:45.327002   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:14:45.327183   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:14:45.327349   61939 main.go:141] libmachine: Using SSH client type: native
	I1004 04:14:45.327511   61939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:14:45.327523   61939 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 04:14:45.428882   61939 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 04:14:45.428966   61939 main.go:141] libmachine: found compatible host: buildroot
	I1004 04:14:45.428973   61939 main.go:141] libmachine: Provisioning with buildroot...
	I1004 04:14:45.428980   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:14:45.429205   61939 buildroot.go:166] provisioning hostname "old-k8s-version-420062"
	I1004 04:14:45.429227   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:14:45.429456   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:14:45.432323   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.432711   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:45.432752   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.432928   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:14:45.433104   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:14:45.433287   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:14:45.433457   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:14:45.433661   61939 main.go:141] libmachine: Using SSH client type: native
	I1004 04:14:45.433840   61939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:14:45.433853   61939 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-420062 && echo "old-k8s-version-420062" | sudo tee /etc/hostname
	I1004 04:14:45.550782   61939 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-420062
	
	I1004 04:14:45.550834   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:14:45.553678   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.554028   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:45.554062   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.554203   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:14:45.554428   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:14:45.554590   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:14:45.554730   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:14:45.554865   61939 main.go:141] libmachine: Using SSH client type: native
	I1004 04:14:45.555094   61939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:14:45.555121   61939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-420062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-420062/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-420062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:14:45.671541   61939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:14:45.671581   61939 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:14:45.671616   61939 buildroot.go:174] setting up certificates
	I1004 04:14:45.671624   61939 provision.go:84] configureAuth start
	I1004 04:14:45.671634   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:14:45.671922   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:14:45.674763   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.675223   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:45.675247   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.675462   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:14:45.677602   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.677898   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:45.677947   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.678037   61939 provision.go:143] copyHostCerts
	I1004 04:14:45.678115   61939 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:14:45.678129   61939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:14:45.678182   61939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:14:45.678273   61939 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:14:45.678282   61939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:14:45.678315   61939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:14:45.678379   61939 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:14:45.678385   61939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:14:45.678401   61939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:14:45.678478   61939 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-420062 san=[127.0.0.1 192.168.50.146 localhost minikube old-k8s-version-420062]
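For reference, the "generating server cert" step above amounts to signing a leaf certificate with the cluster CA and embedding the listed IP and DNS SANs. Below is a minimal, self-contained Go sketch of that idea — hypothetical code, not minikube's provision.go; it generates a throwaway CA in memory instead of loading ca.pem/ca-key.pem, and error handling is omitted for brevity.

    // Hypothetical sketch: issue a server certificate signed by a CA, with
    // IP and DNS SANs like those listed in the log line above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA; the real flow loads ca.pem / ca-key.pem instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf (server) certificate with the SANs from the log line.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-420062"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.146")},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-420062"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }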
	I1004 04:14:45.799447   61939 provision.go:177] copyRemoteCerts
	I1004 04:14:45.799514   61939 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:14:45.799543   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:14:45.802966   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.803365   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:45.803396   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.803622   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:14:45.803831   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:14:45.803995   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:14:45.804121   61939 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:14:45.886651   61939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:14:45.913512   61939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1004 04:14:45.942567   61939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:14:45.971072   61939 provision.go:87] duration metric: took 299.436483ms to configureAuth
	I1004 04:14:45.971111   61939 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:14:45.971310   61939 config.go:182] Loaded profile config "old-k8s-version-420062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1004 04:14:45.971396   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:14:45.974589   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.975004   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:45.975038   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:45.975217   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:14:45.975422   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:14:45.975571   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:14:45.975725   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:14:45.975896   61939 main.go:141] libmachine: Using SSH client type: native
	I1004 04:14:45.976076   61939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:14:45.976096   61939 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:14:46.223357   61939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:14:46.223389   61939 main.go:141] libmachine: Checking connection to Docker...
	I1004 04:14:46.223401   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetURL
	I1004 04:14:46.224843   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using libvirt version 6000000
	I1004 04:14:46.227116   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:46.227524   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:46.227547   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:46.227720   61939 main.go:141] libmachine: Docker is up and running!
	I1004 04:14:46.227734   61939 main.go:141] libmachine: Reticulating splines...
	I1004 04:14:46.227740   61939 client.go:171] duration metric: took 24.766269909s to LocalClient.Create
	I1004 04:14:46.227763   61939 start.go:167] duration metric: took 24.766331784s to libmachine.API.Create "old-k8s-version-420062"
	I1004 04:14:46.227776   61939 start.go:293] postStartSetup for "old-k8s-version-420062" (driver="kvm2")
	I1004 04:14:46.227809   61939 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:14:46.227833   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:14:46.228103   61939 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:14:46.228129   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:14:46.230245   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:46.230586   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:46.230615   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:46.230816   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:14:46.231050   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:14:46.231232   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:14:46.231391   61939 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:14:46.311006   61939 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:14:46.315907   61939 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:14:46.315936   61939 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:14:46.316021   61939 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:14:46.316114   61939 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:14:46.316229   61939 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:14:46.326613   61939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:14:46.353793   61939 start.go:296] duration metric: took 125.984344ms for postStartSetup
	I1004 04:14:46.353839   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetConfigRaw
	I1004 04:14:46.354446   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:14:46.356858   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:46.357190   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:46.357221   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:46.357534   61939 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/config.json ...
	I1004 04:14:46.357749   61939 start.go:128] duration metric: took 24.920306331s to createHost
	I1004 04:14:46.357772   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:14:46.360085   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:46.360452   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:46.360479   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:46.360619   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:14:46.360797   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:14:46.360909   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:14:46.361054   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:14:46.361173   61939 main.go:141] libmachine: Using SSH client type: native
	I1004 04:14:46.361358   61939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:14:46.361368   61939 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:14:46.464958   61939 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015286.433530704
	
	I1004 04:14:46.464985   61939 fix.go:216] guest clock: 1728015286.433530704
	I1004 04:14:46.464994   61939 fix.go:229] Guest: 2024-10-04 04:14:46.433530704 +0000 UTC Remote: 2024-10-04 04:14:46.357761467 +0000 UTC m=+42.302029541 (delta=75.769237ms)
	I1004 04:14:46.465021   61939 fix.go:200] guest clock delta is within tolerance: 75.769237ms
	I1004 04:14:46.465026   61939 start.go:83] releasing machines lock for "old-k8s-version-420062", held for 25.027767664s
	I1004 04:14:46.465051   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:14:46.465277   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:14:46.468490   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:46.468912   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:46.468951   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:46.469133   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:14:46.469629   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:14:46.469841   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:14:46.469967   61939 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:14:46.470009   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:14:46.470164   61939 ssh_runner.go:195] Run: cat /version.json
	I1004 04:14:46.470194   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:14:46.472901   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:46.473206   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:46.473380   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:46.473419   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:46.473518   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:14:46.473600   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:46.473635   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:46.473699   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:14:46.473802   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:14:46.473876   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:14:46.473954   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:14:46.474031   61939 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:14:46.474069   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:14:46.474177   61939 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:14:46.578484   61939 ssh_runner.go:195] Run: systemctl --version
	I1004 04:14:46.585627   61939 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:14:46.750564   61939 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:14:46.757910   61939 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:14:46.757998   61939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:14:46.778948   61939 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:14:46.778971   61939 start.go:495] detecting cgroup driver to use...
	I1004 04:14:46.779028   61939 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:14:46.796023   61939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:14:46.812071   61939 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:14:46.812145   61939 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:14:46.827720   61939 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:14:46.844524   61939 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:14:46.970321   61939 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:14:47.127348   61939 docker.go:233] disabling docker service ...
	I1004 04:14:47.127418   61939 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:14:47.142904   61939 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:14:47.156657   61939 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:14:47.300382   61939 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:14:47.446015   61939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:14:47.461877   61939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:14:47.482498   61939 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1004 04:14:47.482562   61939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:14:47.494599   61939 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:14:47.494660   61939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:14:47.506757   61939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:14:47.518979   61939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:14:47.532053   61939 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:14:47.544430   61939 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:14:47.554881   61939 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:14:47.554944   61939 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:14:47.570024   61939 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:14:47.580916   61939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:14:47.699943   61939 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:14:47.803719   61939 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:14:47.803835   61939 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
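The "Will wait 60s for socket path" step is a poll-until-timeout check on the CRI-O socket. A hypothetical Go sketch of such a wait loop follows — illustrative only; in the log the check is a remote stat run over SSH rather than a local one.

    // Hypothetical sketch: poll for a unix socket path until it appears or a timeout elapses.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil // socket exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }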
	I1004 04:14:47.808962   61939 start.go:563] Will wait 60s for crictl version
	I1004 04:14:47.809034   61939 ssh_runner.go:195] Run: which crictl
	I1004 04:14:47.813548   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:14:47.855393   61939 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:14:47.855482   61939 ssh_runner.go:195] Run: crio --version
	I1004 04:14:47.891748   61939 ssh_runner.go:195] Run: crio --version
	I1004 04:14:47.926239   61939 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1004 04:14:47.927859   61939 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:14:47.930958   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:47.931347   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:14:36 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:14:47.931381   61939 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:14:47.931626   61939 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1004 04:14:47.936378   61939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
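The /etc/hosts edits above (grep for the entry, rewrite only if missing) follow an idempotent check-then-append pattern. A hypothetical Go sketch of the same idea — illustrative only, not the ssh_runner implementation, and operating on the local filesystem rather than over SSH:

    // Hypothetical sketch: ensure an "<ip> <hostname>" entry exists in a hosts file,
    // appending it only when it is not already present.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func ensureHostsEntry(path, ip, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        for _, line := range strings.Split(string(data), "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[0] == ip && fields[1] == hostname {
                return nil // entry already present, nothing to do
            }
        }
        f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = fmt.Fprintf(f, "%s\t%s\n", ip, hostname)
        return err
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }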
	I1004 04:14:47.950103   61939 kubeadm.go:883] updating cluster {Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:14:47.950216   61939 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1004 04:14:47.950274   61939 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:14:47.984907   61939 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:14:47.984983   61939 ssh_runner.go:195] Run: which lz4
	I1004 04:14:47.989186   61939 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:14:47.993694   61939 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:14:47.993732   61939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1004 04:14:49.850475   61939 crio.go:462] duration metric: took 1.861317812s to copy over tarball
	I1004 04:14:49.850556   61939 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:14:52.930276   61939 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.079685617s)
	I1004 04:14:52.930315   61939 crio.go:469] duration metric: took 3.079810525s to extract the tarball
	I1004 04:14:52.930325   61939 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:14:52.972912   61939 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:14:53.026712   61939 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:14:53.026740   61939 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 04:14:53.026818   61939 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:14:53.026861   61939 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:14:53.026878   61939 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:14:53.026887   61939 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1004 04:14:53.026880   61939 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1004 04:14:53.026845   61939 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:14:53.026862   61939 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:14:53.026834   61939 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:14:53.029004   61939 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:14:53.029012   61939 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:14:53.029026   61939 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1004 04:14:53.029039   61939 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:14:53.029004   61939 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:14:53.029108   61939 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1004 04:14:53.029135   61939 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:14:53.029333   61939 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:14:53.214936   61939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1004 04:14:53.249335   61939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:14:53.276869   61939 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1004 04:14:53.276909   61939 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1004 04:14:53.276968   61939 ssh_runner.go:195] Run: which crictl
	I1004 04:14:53.312563   61939 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1004 04:14:53.312611   61939 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:14:53.312661   61939 ssh_runner.go:195] Run: which crictl
	I1004 04:14:53.312769   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:14:53.318154   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:14:53.358252   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:14:53.368678   61939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:14:53.369264   61939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:14:53.381615   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:14:53.386495   61939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1004 04:14:53.401883   61939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:14:53.452021   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:14:53.458910   61939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1004 04:14:53.527867   61939 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1004 04:14:53.527908   61939 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:14:53.527958   61939 ssh_runner.go:195] Run: which crictl
	I1004 04:14:53.562379   61939 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1004 04:14:53.562427   61939 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:14:53.562482   61939 ssh_runner.go:195] Run: which crictl
	I1004 04:14:53.562624   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:14:53.613934   61939 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1004 04:14:53.613980   61939 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1004 04:14:53.614025   61939 ssh_runner.go:195] Run: which crictl
	I1004 04:14:53.614698   61939 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1004 04:14:53.614737   61939 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:14:53.614757   61939 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1004 04:14:53.614790   61939 ssh_runner.go:195] Run: which crictl
	I1004 04:14:53.614844   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:14:53.615946   61939 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1004 04:14:53.615996   61939 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:14:53.616034   61939 ssh_runner.go:195] Run: which crictl
	I1004 04:14:53.665373   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:14:53.665538   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:14:53.665720   61939 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1004 04:14:53.681920   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:14:53.681967   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:14:53.682034   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:14:53.810124   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:14:53.810188   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:14:53.810331   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:14:53.810448   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:14:53.810553   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:14:53.934390   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:14:53.934440   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:14:53.934470   61939 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1004 04:14:53.935774   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:14:53.936753   61939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:14:54.013037   61939 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1004 04:14:54.013170   61939 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1004 04:14:54.029596   61939 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1004 04:14:54.038200   61939 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1004 04:14:54.298346   61939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:14:54.449723   61939 cache_images.go:92] duration metric: took 1.422964516s to LoadCachedImages
	W1004 04:14:54.449817   61939 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1004 04:14:54.449834   61939 kubeadm.go:934] updating node { 192.168.50.146 8443 v1.20.0 crio true true} ...
	I1004 04:14:54.449976   61939 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-420062 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:14:54.450062   61939 ssh_runner.go:195] Run: crio config
	I1004 04:14:54.505994   61939 cni.go:84] Creating CNI manager for ""
	I1004 04:14:54.506020   61939 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:14:54.506032   61939 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:14:54.506058   61939 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.146 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-420062 NodeName:old-k8s-version-420062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1004 04:14:54.506235   61939 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-420062"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:14:54.506305   61939 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1004 04:14:54.518757   61939 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:14:54.518831   61939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:14:54.529944   61939 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1004 04:14:54.551620   61939 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:14:54.573425   61939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1004 04:14:54.595527   61939 ssh_runner.go:195] Run: grep 192.168.50.146	control-plane.minikube.internal$ /etc/hosts
	I1004 04:14:54.600735   61939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:14:54.615800   61939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:14:54.759096   61939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:14:54.780312   61939 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062 for IP: 192.168.50.146
	I1004 04:14:54.780336   61939 certs.go:194] generating shared ca certs ...
	I1004 04:14:54.780357   61939 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:14:54.780546   61939 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:14:54.780597   61939 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:14:54.780609   61939 certs.go:256] generating profile certs ...
	I1004 04:14:54.780678   61939 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.key
	I1004 04:14:54.780706   61939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.crt with IP's: []
	I1004 04:14:55.066584   61939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.crt ...
	I1004 04:14:55.066632   61939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.crt: {Name:mk3f9579f23d8d448ca9030395adbfdf5221a55a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:14:55.066855   61939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.key ...
	I1004 04:14:55.066876   61939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.key: {Name:mka3d27f83a77a65fc7ea4b0c04d1169dff29669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:14:55.067002   61939 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key.c1f9ed6b
	I1004 04:14:55.067027   61939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.crt.c1f9ed6b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.146]
	I1004 04:14:55.184614   61939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.crt.c1f9ed6b ...
	I1004 04:14:55.184663   61939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.crt.c1f9ed6b: {Name:mk9b3de737233a461599451c52353e018f693518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:14:55.184827   61939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key.c1f9ed6b ...
	I1004 04:14:55.184841   61939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key.c1f9ed6b: {Name:mk09f5b03908ae0b05f9394a891da99203a9acfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:14:55.184931   61939 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.crt.c1f9ed6b -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.crt
	I1004 04:14:55.185031   61939 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key.c1f9ed6b -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key
	I1004 04:14:55.185087   61939 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key
	I1004 04:14:55.185103   61939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.crt with IP's: []
	I1004 04:14:55.291035   61939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.crt ...
	I1004 04:14:55.291073   61939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.crt: {Name:mk002ed5dde086da1d5c66aa683f7f8e915bba45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:14:55.291273   61939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key ...
	I1004 04:14:55.291293   61939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key: {Name:mk88b5e73895139fa339c5b3d3ed2d1d169d9e96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:14:55.291486   61939 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:14:55.291520   61939 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:14:55.291529   61939 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:14:55.291560   61939 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:14:55.291582   61939 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:14:55.291605   61939 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:14:55.291640   61939 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:14:55.292860   61939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:14:55.321872   61939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:14:55.355177   61939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:14:55.388463   61939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:14:55.421196   61939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1004 04:14:55.457822   61939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 04:14:55.493131   61939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:14:55.525647   61939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:14:55.555939   61939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:14:55.587747   61939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:14:55.621589   61939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:14:55.656803   61939 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:14:55.679193   61939 ssh_runner.go:195] Run: openssl version
	I1004 04:14:55.686924   61939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:14:55.709097   61939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:14:55.714217   61939 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:14:55.714289   61939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:14:55.720700   61939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:14:55.732943   61939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:14:55.749680   61939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:14:55.756036   61939 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:14:55.756109   61939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:14:55.762383   61939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:14:55.774038   61939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:14:55.790073   61939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:14:55.796184   61939 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:14:55.796229   61939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:14:55.804282   61939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
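The block above is how minikube publishes its CA material into the guest's system trust store: each PEM is copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under the subject-hash name OpenSSL uses for lookups. A minimal manual equivalent for the minikube CA, using only the paths and hash already shown in this log:

    # compute the subject hash OpenSSL uses for trust-store lookups (b5213941 in this run)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # expose the CA under that hash so tools on the node trust it
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0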
	I1004 04:14:55.818894   61939 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:14:55.823816   61939 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
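The stat probe above is how minikube decides this is likely a first start: the apiserver-kubelet-client certificate is only created by an earlier kubeadm run, so a missing file suggests there is no previous control plane on the node. Reproduced by hand:

    # exit status 1 here simply means no prior kubeadm-generated client cert on this node
    stat /var/lib/minikube/certs/apiserver-kubelet-client.crt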
	I1004 04:14:55.823879   61939 kubeadm.go:392] StartCluster: {Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:14:55.823983   61939 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:14:55.824029   61939 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:14:55.872000   61939 cri.go:89] found id: ""
	I1004 04:14:55.872067   61939 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:14:55.883676   61939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:14:55.895124   61939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:14:55.909005   61939 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:14:55.909032   61939 kubeadm.go:157] found existing configuration files:
	
	I1004 04:14:55.909090   61939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:14:55.919414   61939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:14:55.919490   61939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:14:55.931046   61939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:14:55.941248   61939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:14:55.941327   61939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:14:55.955470   61939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:14:55.965379   61939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:14:55.965453   61939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:14:55.976022   61939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:14:55.987087   61939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:14:55.987177   61939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
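The four "config check" / rm stanzas above are minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when that endpoint is not found (here the files do not exist at all, so every grep exits with status 2 and the rm is effectively a no-op). One iteration of that loop, sketched as plain shell:

    # drop the kubeconfig unless it already points at the expected endpoint
    if ! sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf; then
        sudo rm -f /etc/kubernetes/admin.conf
    fi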
	I1004 04:14:55.998415   61939 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:14:56.154153   61939 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:14:56.154390   61939 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:14:56.343162   61939 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:14:56.343278   61939 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:14:56.343399   61939 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 04:14:56.650457   61939 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:14:56.652579   61939 out.go:235]   - Generating certificates and keys ...
	I1004 04:14:56.652689   61939 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:14:56.652784   61939 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:14:56.864085   61939 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 04:14:57.058443   61939 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1004 04:14:57.295982   61939 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1004 04:14:57.621866   61939 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1004 04:14:57.839457   61939 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1004 04:14:57.839733   61939 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-420062] and IPs [192.168.50.146 127.0.0.1 ::1]
	I1004 04:14:57.925199   61939 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1004 04:14:57.925430   61939 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-420062] and IPs [192.168.50.146 127.0.0.1 ::1]
	I1004 04:14:58.283222   61939 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 04:14:58.401430   61939 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 04:14:58.574140   61939 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1004 04:14:58.574943   61939 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:14:58.979860   61939 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:14:59.052200   61939 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:14:59.282237   61939 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:14:59.414881   61939 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:14:59.439136   61939 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:14:59.439258   61939 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:14:59.439348   61939 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:14:59.599517   61939 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:14:59.601391   61939 out.go:235]   - Booting up control plane ...
	I1004 04:14:59.601534   61939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:14:59.610383   61939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:14:59.611412   61939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:14:59.612395   61939 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:14:59.617032   61939 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 04:15:39.606739   61939 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:15:39.607460   61939 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:15:39.607722   61939 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:15:44.607771   61939 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:15:44.608035   61939 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:15:54.607764   61939 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:15:54.607984   61939 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:16:14.608387   61939 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:16:14.608686   61939 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:16:54.609300   61939 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:16:54.609525   61939 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:16:54.609540   61939 kubeadm.go:310] 
	I1004 04:16:54.609593   61939 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:16:54.609632   61939 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:16:54.609639   61939 kubeadm.go:310] 
	I1004 04:16:54.609678   61939 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:16:54.609735   61939 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:16:54.609903   61939 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:16:54.609927   61939 kubeadm.go:310] 
	I1004 04:16:54.610107   61939 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:16:54.610166   61939 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:16:54.610221   61939 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:16:54.610237   61939 kubeadm.go:310] 
	I1004 04:16:54.610401   61939 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:16:54.610481   61939 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:16:54.610488   61939 kubeadm.go:310] 
	I1004 04:16:54.610612   61939 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:16:54.610749   61939 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:16:54.610856   61939 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:16:54.610959   61939 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:16:54.610969   61939 kubeadm.go:310] 
	I1004 04:16:54.611772   61939 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:16:54.611896   61939 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:16:54.611984   61939 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
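Every [kubelet-check] probe above is the same HTTP health call against the kubelet's local healthz endpoint, and it is the kubelet never answering that makes the wait-control-plane phase give up with "timed out waiting for the condition". The probe and the follow-up checks kubeadm recommends can be run directly on the node:

    curl -sS http://localhost:10248/healthz        # the health probe kubeadm keeps retrying
    systemctl status kubelet                       # is the service running at all?
    journalctl -xeu kubelet | tail -n 50           # why it stopped (or never started)
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause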
	W1004 04:16:54.612170   61939 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-420062] and IPs [192.168.50.146 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-420062] and IPs [192.168.50.146 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1004 04:16:54.612239   61939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:16:56.144957   61939 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.532688032s)
	I1004 04:16:56.145043   61939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:16:56.159854   61939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:16:56.170694   61939 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:16:56.170722   61939 kubeadm.go:157] found existing configuration files:
	
	I1004 04:16:56.170780   61939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:16:56.181043   61939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:16:56.181099   61939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:16:56.191698   61939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:16:56.202059   61939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:16:56.202119   61939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:16:56.212454   61939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:16:56.222265   61939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:16:56.222321   61939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:16:56.232726   61939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:16:56.242463   61939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:16:56.242525   61939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:16:56.252846   61939 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:16:56.475664   61939 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:18:52.825264   61939 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:18:52.825369   61939 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1004 04:18:52.826905   61939 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:18:52.826955   61939 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:18:52.827034   61939 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:18:52.827148   61939 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:18:52.827304   61939 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 04:18:52.827402   61939 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:18:52.829312   61939 out.go:235]   - Generating certificates and keys ...
	I1004 04:18:52.829409   61939 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:18:52.829494   61939 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:18:52.829579   61939 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:18:52.829634   61939 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:18:52.829766   61939 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:18:52.829850   61939 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:18:52.829944   61939 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:18:52.829999   61939 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:18:52.830099   61939 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:18:52.830217   61939 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:18:52.830283   61939 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:18:52.830370   61939 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:18:52.830435   61939 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:18:52.830510   61939 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:18:52.830647   61939 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:18:52.830747   61939 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:18:52.830901   61939 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:18:52.830977   61939 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:18:52.831013   61939 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:18:52.831072   61939 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:18:52.832694   61939 out.go:235]   - Booting up control plane ...
	I1004 04:18:52.832794   61939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:18:52.832890   61939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:18:52.832978   61939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:18:52.833088   61939 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:18:52.833264   61939 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 04:18:52.833333   61939 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:18:52.833426   61939 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:18:52.833617   61939 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:18:52.833710   61939 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:18:52.833910   61939 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:18:52.833984   61939 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:18:52.834163   61939 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:18:52.834234   61939 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:18:52.834403   61939 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:18:52.834461   61939 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:18:52.834630   61939 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:18:52.834644   61939 kubeadm.go:310] 
	I1004 04:18:52.834703   61939 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:18:52.834763   61939 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:18:52.834772   61939 kubeadm.go:310] 
	I1004 04:18:52.834834   61939 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:18:52.834878   61939 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:18:52.835000   61939 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:18:52.835010   61939 kubeadm.go:310] 
	I1004 04:18:52.835116   61939 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:18:52.835172   61939 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:18:52.835213   61939 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:18:52.835226   61939 kubeadm.go:310] 
	I1004 04:18:52.835377   61939 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:18:52.835490   61939 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:18:52.835501   61939 kubeadm.go:310] 
	I1004 04:18:52.835618   61939 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:18:52.835730   61939 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:18:52.835818   61939 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:18:52.835882   61939 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:18:52.835927   61939 kubeadm.go:310] 
	I1004 04:18:52.835943   61939 kubeadm.go:394] duration metric: took 3m57.012069719s to StartCluster
	I1004 04:18:52.835981   61939 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:18:52.836032   61939 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:18:52.880511   61939 cri.go:89] found id: ""
	I1004 04:18:52.880543   61939 logs.go:282] 0 containers: []
	W1004 04:18:52.880552   61939 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:18:52.880559   61939 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:18:52.880620   61939 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:18:52.928773   61939 cri.go:89] found id: ""
	I1004 04:18:52.928802   61939 logs.go:282] 0 containers: []
	W1004 04:18:52.928809   61939 logs.go:284] No container was found matching "etcd"
	I1004 04:18:52.928815   61939 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:18:52.928862   61939 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:18:52.965319   61939 cri.go:89] found id: ""
	I1004 04:18:52.965345   61939 logs.go:282] 0 containers: []
	W1004 04:18:52.965352   61939 logs.go:284] No container was found matching "coredns"
	I1004 04:18:52.965358   61939 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:18:52.965410   61939 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:18:53.014519   61939 cri.go:89] found id: ""
	I1004 04:18:53.014542   61939 logs.go:282] 0 containers: []
	W1004 04:18:53.014550   61939 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:18:53.014556   61939 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:18:53.014604   61939 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:18:53.060120   61939 cri.go:89] found id: ""
	I1004 04:18:53.060145   61939 logs.go:282] 0 containers: []
	W1004 04:18:53.060155   61939 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:18:53.060162   61939 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:18:53.060234   61939 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:18:53.112853   61939 cri.go:89] found id: ""
	I1004 04:18:53.112887   61939 logs.go:282] 0 containers: []
	W1004 04:18:53.112894   61939 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:18:53.112900   61939 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:18:53.112951   61939 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:18:53.149334   61939 cri.go:89] found id: ""
	I1004 04:18:53.149361   61939 logs.go:282] 0 containers: []
	W1004 04:18:53.149374   61939 logs.go:284] No container was found matching "kindnet"
	I1004 04:18:53.149385   61939 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:18:53.149397   61939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:18:53.257230   61939 logs.go:123] Gathering logs for container status ...
	I1004 04:18:53.257273   61939 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:18:53.297657   61939 logs.go:123] Gathering logs for kubelet ...
	I1004 04:18:53.297693   61939 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:18:53.351439   61939 logs.go:123] Gathering logs for dmesg ...
	I1004 04:18:53.351487   61939 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:18:53.365961   61939 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:18:53.365995   61939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:18:53.484665   61939 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
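With no kube-* containers found, minikube falls back to node-level log collection, and the "connection ... refused" from kubectl describe nodes is the expected symptom of the same failure: nothing is listening on localhost:8443 because the apiserver static pod never started. The same evidence can be pulled by hand with the commands minikube itself ran here:

    sudo journalctl -u crio -n 400      # container runtime log
    sudo journalctl -u kubelet -n 400   # kubelet log
    sudo crictl ps -a                   # confirms no control-plane containers exist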
	W1004 04:18:53.484688   61939 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1004 04:18:53.484730   61939 out.go:270] * 
	* 
	W1004 04:18:53.484795   61939 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:18:53.484814   61939 out.go:270] * 
	W1004 04:18:53.485683   61939 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 04:18:53.488775   61939 out.go:201] 
	W1004 04:18:53.490339   61939 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:18:53.490382   61939 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1004 04:18:53.490404   61939 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1004 04:18:53.492982   61939 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-420062 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420062 -n old-k8s-version-420062
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420062 -n old-k8s-version-420062: exit status 6 (226.407084ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 04:18:53.762681   66478 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-420062" does not appear in /home/jenkins/minikube-integration/19546-9647/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-420062" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (289.73s)
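Note: the repeated [kubelet-check] lines in the failure above come from kubeadm probing the kubelet's health endpoint (shown in the log as curl -sSL http://localhost:10248/healthz). A minimal Go sketch of the same probe, handy when reproducing this failure directly on the node; the retry count and interval here are illustrative choices, not kubeadm's actual values:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// Poll the kubelet healthz endpoint the way the kubeadm
	// [kubelet-check] phase does, printing each result.
	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		for i := 0; i < 10; i++ { // illustrative retry budget
			resp, err := client.Get("http://localhost:10248/healthz")
			if err != nil {
				fmt.Printf("attempt %d: kubelet not reachable: %v\n", i+1, err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("attempt %d: %s %s\n", i+1, resp.Status, body)
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("kubelet never became healthy; check 'journalctl -xeu kubelet'")
	}

If the endpoint keeps refusing connections, as in the log above, the kubelet process itself is the thing to inspect ('systemctl status kubelet', 'journalctl -xeu kubelet'), not the control-plane containers.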

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (61.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-316059 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-316059 --driver=kvm2  --container-runtime=crio: exit status 80 (1m1.325837122s)

                                                
                                                
-- stdout --
	* [NoKubernetes-316059] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-316059
	* Restarting existing kvm2 VM for "NoKubernetes-316059" ...
	* Updating the running kvm2 "NoKubernetes-316059" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	* Failed to start kvm2 VM. Running "minikube delete -p NoKubernetes-316059" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-linux-amd64 start -p NoKubernetes-316059 --driver=kvm2  --container-runtime=crio" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-316059 -n NoKubernetes-316059
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-316059 -n NoKubernetes-316059: exit status 6 (240.810116ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 04:15:58.783221   63645 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-316059" does not appear in /home/jenkins/minikube-integration/19546-9647/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-316059" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (61.57s)
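Note: the provisioning step that fails in this test writes a CRI-O drop-in and restarts the service (the exact shell command is captured in the stderr above). A minimal Go sketch of that same sequence, assuming it runs as root on the guest; only the drop-in path and the CRIO_MINIKUBE_OPTIONS value are taken from the log, everything else is illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// Mirror the provisioning step from the log: write the CRI-O drop-in
	// and restart the service, surfacing systemctl's output on failure.
	func main() {
		const dropIn = "/etc/sysconfig/crio.minikube"
		const contents = "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"

		if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
			fmt.Println("mkdir failed:", err)
			os.Exit(1)
		}
		if err := os.WriteFile(dropIn, []byte(contents), 0o644); err != nil {
			fmt.Println("write failed:", err)
			os.Exit(1)
		}
		out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput()
		if err != nil {
			// This is where the test above fails; the next step would be
			// 'systemctl status crio' / 'journalctl -xeu crio' on the guest.
			fmt.Printf("crio restart failed: %v\n%s", err, out)
			os.Exit(1)
		}
		fmt.Println("crio restarted")
	}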

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-658545 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-658545 --alsologtostderr -v=3: exit status 82 (2m0.532998334s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-658545"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 04:16:06.974228   64046 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:16:06.974357   64046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:16:06.974369   64046 out.go:358] Setting ErrFile to fd 2...
	I1004 04:16:06.974376   64046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:16:06.974658   64046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:16:06.974945   64046 out.go:352] Setting JSON to false
	I1004 04:16:06.975039   64046 mustload.go:65] Loading cluster: no-preload-658545
	I1004 04:16:06.975579   64046 config.go:182] Loaded profile config "no-preload-658545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:16:06.975676   64046 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/config.json ...
	I1004 04:16:06.975900   64046 mustload.go:65] Loading cluster: no-preload-658545
	I1004 04:16:06.976056   64046 config.go:182] Loaded profile config "no-preload-658545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:16:06.976086   64046 stop.go:39] StopHost: no-preload-658545
	I1004 04:16:06.976657   64046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:16:06.976714   64046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:16:06.992063   64046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43431
	I1004 04:16:06.992567   64046 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:16:06.993233   64046 main.go:141] libmachine: Using API Version  1
	I1004 04:16:06.993267   64046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:16:06.993619   64046 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:16:06.995903   64046 out.go:177] * Stopping node "no-preload-658545"  ...
	I1004 04:16:06.997788   64046 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1004 04:16:06.997828   64046 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:16:06.998131   64046 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1004 04:16:06.998156   64046 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:16:07.001437   64046 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:16:07.001936   64046 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:16:07.001964   64046 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:16:07.002166   64046 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:16:07.002370   64046 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:16:07.002542   64046 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:16:07.002715   64046 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:16:07.118893   64046 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1004 04:16:07.180287   64046 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1004 04:16:07.237672   64046 main.go:141] libmachine: Stopping "no-preload-658545"...
	I1004 04:16:07.237714   64046 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:16:07.239432   64046 main.go:141] libmachine: (no-preload-658545) Calling .Stop
	I1004 04:16:07.243557   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 0/120
	I1004 04:16:08.245264   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 1/120
	I1004 04:16:09.246799   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 2/120
	I1004 04:16:10.248584   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 3/120
	I1004 04:16:11.250782   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 4/120
	I1004 04:16:12.252872   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 5/120
	I1004 04:16:13.254295   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 6/120
	I1004 04:16:14.255690   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 7/120
	I1004 04:16:15.257112   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 8/120
	I1004 04:16:16.258565   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 9/120
	I1004 04:16:17.260227   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 10/120
	I1004 04:16:18.261399   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 11/120
	I1004 04:16:19.262831   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 12/120
	I1004 04:16:20.264161   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 13/120
	I1004 04:16:21.265605   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 14/120
	I1004 04:16:22.268054   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 15/120
	I1004 04:16:23.269724   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 16/120
	I1004 04:16:24.271655   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 17/120
	I1004 04:16:25.273391   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 18/120
	I1004 04:16:26.275118   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 19/120
	I1004 04:16:27.277518   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 20/120
	I1004 04:16:28.279762   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 21/120
	I1004 04:16:29.281200   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 22/120
	I1004 04:16:30.283061   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 23/120
	I1004 04:16:31.285344   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 24/120
	I1004 04:16:32.287368   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 25/120
	I1004 04:16:33.288867   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 26/120
	I1004 04:16:34.290388   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 27/120
	I1004 04:16:35.291762   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 28/120
	I1004 04:16:36.293279   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 29/120
	I1004 04:16:37.295752   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 30/120
	I1004 04:16:38.297200   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 31/120
	I1004 04:16:39.298881   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 32/120
	I1004 04:16:40.300381   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 33/120
	I1004 04:16:41.301697   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 34/120
	I1004 04:16:42.304360   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 35/120
	I1004 04:16:43.305695   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 36/120
	I1004 04:16:44.307042   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 37/120
	I1004 04:16:45.308458   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 38/120
	I1004 04:16:46.310599   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 39/120
	I1004 04:16:47.312773   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 40/120
	I1004 04:16:48.314422   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 41/120
	I1004 04:16:49.316146   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 42/120
	I1004 04:16:50.317507   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 43/120
	I1004 04:16:51.319432   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 44/120
	I1004 04:16:52.321884   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 45/120
	I1004 04:16:53.323342   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 46/120
	I1004 04:16:54.325016   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 47/120
	I1004 04:16:55.326491   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 48/120
	I1004 04:16:56.328136   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 49/120
	I1004 04:16:57.330249   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 50/120
	I1004 04:16:58.331749   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 51/120
	I1004 04:16:59.333234   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 52/120
	I1004 04:17:00.335128   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 53/120
	I1004 04:17:01.336820   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 54/120
	I1004 04:17:02.338847   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 55/120
	I1004 04:17:03.340343   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 56/120
	I1004 04:17:04.342018   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 57/120
	I1004 04:17:05.343720   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 58/120
	I1004 04:17:06.345309   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 59/120
	I1004 04:17:07.346735   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 60/120
	I1004 04:17:08.347901   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 61/120
	I1004 04:17:09.349641   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 62/120
	I1004 04:17:10.351380   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 63/120
	I1004 04:17:11.353329   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 64/120
	I1004 04:17:12.355752   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 65/120
	I1004 04:17:13.357371   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 66/120
	I1004 04:17:14.359041   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 67/120
	I1004 04:17:15.360602   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 68/120
	I1004 04:17:16.362056   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 69/120
	I1004 04:17:17.363553   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 70/120
	I1004 04:17:18.365071   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 71/120
	I1004 04:17:19.366563   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 72/120
	I1004 04:17:20.368165   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 73/120
	I1004 04:17:21.369824   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 74/120
	I1004 04:17:22.372444   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 75/120
	I1004 04:17:23.373997   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 76/120
	I1004 04:17:24.375560   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 77/120
	I1004 04:17:25.377192   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 78/120
	I1004 04:17:26.378956   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 79/120
	I1004 04:17:27.381477   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 80/120
	I1004 04:17:28.383736   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 81/120
	I1004 04:17:29.385313   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 82/120
	I1004 04:17:30.386881   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 83/120
	I1004 04:17:31.388533   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 84/120
	I1004 04:17:32.390068   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 85/120
	I1004 04:17:33.391655   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 86/120
	I1004 04:17:34.393119   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 87/120
	I1004 04:17:35.395300   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 88/120
	I1004 04:17:36.396805   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 89/120
	I1004 04:17:37.398710   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 90/120
	I1004 04:17:38.400351   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 91/120
	I1004 04:17:39.402366   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 92/120
	I1004 04:17:40.403944   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 93/120
	I1004 04:17:41.406361   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 94/120
	I1004 04:17:42.408447   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 95/120
	I1004 04:17:43.410400   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 96/120
	I1004 04:17:44.411830   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 97/120
	I1004 04:17:45.413361   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 98/120
	I1004 04:17:46.414882   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 99/120
	I1004 04:17:47.417386   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 100/120
	I1004 04:17:48.418860   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 101/120
	I1004 04:17:49.420572   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 102/120
	I1004 04:17:50.422752   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 103/120
	I1004 04:17:51.424538   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 104/120
	I1004 04:17:52.427081   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 105/120
	I1004 04:17:53.428807   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 106/120
	I1004 04:17:54.430491   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 107/120
	I1004 04:17:55.431981   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 108/120
	I1004 04:17:56.434406   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 109/120
	I1004 04:17:57.437061   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 110/120
	I1004 04:17:58.438962   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 111/120
	I1004 04:17:59.440621   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 112/120
	I1004 04:18:00.442115   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 113/120
	I1004 04:18:01.444075   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 114/120
	I1004 04:18:02.446357   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 115/120
	I1004 04:18:03.447712   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 116/120
	I1004 04:18:04.449249   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 117/120
	I1004 04:18:05.451101   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 118/120
	I1004 04:18:06.452784   64046 main.go:141] libmachine: (no-preload-658545) Waiting for machine to stop 119/120
	I1004 04:18:07.453595   64046 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1004 04:18:07.453668   64046 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 04:18:07.455461   64046 out.go:201] 
	W1004 04:18:07.456792   64046 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1004 04:18:07.456813   64046 out.go:270] * 
	W1004 04:18:07.459273   64046 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 04:18:07.460491   64046 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-658545 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658545 -n no-preload-658545
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658545 -n no-preload-658545: exit status 3 (18.525507849s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 04:18:25.988096   66025 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host
	E1004 04:18:25.988126   66025 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-658545" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.06s)
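Note: the stop failure above is a bounded poll: the driver checks the VM state roughly once per second for 120 attempts and gives up while the state is still "Running", which then surfaces as GUEST_STOP_TIMEOUT. A minimal sketch of that loop shape; getState here is a hypothetical stand-in for the libmachine driver call, not minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// getState stands in for the libmachine driver's state query.
	func getState() string { return "Running" }

	// waitForStop polls up to 120 one-second ticks for the VM to leave
	// the Running state, mirroring the "Waiting for machine to stop N/120"
	// lines in the log above.
	func waitForStop() error {
		for i := 0; i < 120; i++ {
			if getState() != "Running" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/120\n", i)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop(); err != nil {
			fmt.Println("stop err:", err)
		}
	}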

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-934812 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-934812 --alsologtostderr -v=3: exit status 82 (2m0.504922818s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-934812"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 04:16:37.618063   64330 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:16:37.618614   64330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:16:37.618631   64330 out.go:358] Setting ErrFile to fd 2...
	I1004 04:16:37.618640   64330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:16:37.619091   64330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:16:37.619464   64330 out.go:352] Setting JSON to false
	I1004 04:16:37.619556   64330 mustload.go:65] Loading cluster: embed-certs-934812
	I1004 04:16:37.620592   64330 config.go:182] Loaded profile config "embed-certs-934812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:16:37.620710   64330 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/config.json ...
	I1004 04:16:37.620917   64330 mustload.go:65] Loading cluster: embed-certs-934812
	I1004 04:16:37.621052   64330 config.go:182] Loaded profile config "embed-certs-934812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:16:37.621085   64330 stop.go:39] StopHost: embed-certs-934812
	I1004 04:16:37.621501   64330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:16:37.621566   64330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:16:37.636267   64330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43295
	I1004 04:16:37.636818   64330 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:16:37.637372   64330 main.go:141] libmachine: Using API Version  1
	I1004 04:16:37.637394   64330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:16:37.637733   64330 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:16:37.640256   64330 out.go:177] * Stopping node "embed-certs-934812"  ...
	I1004 04:16:37.641814   64330 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1004 04:16:37.641846   64330 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:16:37.642104   64330 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1004 04:16:37.642129   64330 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:16:37.645500   64330 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:16:37.645899   64330 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:15:46 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:16:37.645945   64330 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:16:37.646108   64330 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:16:37.646312   64330 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:16:37.646467   64330 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:16:37.646625   64330 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:16:37.751133   64330 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1004 04:16:37.813018   64330 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1004 04:16:37.855747   64330 main.go:141] libmachine: Stopping "embed-certs-934812"...
	I1004 04:16:37.855773   64330 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:16:37.857512   64330 main.go:141] libmachine: (embed-certs-934812) Calling .Stop
	I1004 04:16:37.861133   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 0/120
	I1004 04:16:38.862424   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 1/120
	I1004 04:16:39.863805   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 2/120
	I1004 04:16:40.865327   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 3/120
	I1004 04:16:41.866928   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 4/120
	I1004 04:16:42.869019   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 5/120
	I1004 04:16:43.870320   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 6/120
	I1004 04:16:44.872079   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 7/120
	I1004 04:16:45.873665   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 8/120
	I1004 04:16:46.875188   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 9/120
	I1004 04:16:47.876762   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 10/120
	I1004 04:16:48.878488   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 11/120
	I1004 04:16:49.880284   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 12/120
	I1004 04:16:50.881942   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 13/120
	I1004 04:16:51.883565   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 14/120
	I1004 04:16:52.885969   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 15/120
	I1004 04:16:53.887528   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 16/120
	I1004 04:16:54.889354   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 17/120
	I1004 04:16:55.890766   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 18/120
	I1004 04:16:56.892553   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 19/120
	I1004 04:16:57.894917   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 20/120
	I1004 04:16:58.896456   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 21/120
	I1004 04:16:59.898230   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 22/120
	I1004 04:17:00.900085   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 23/120
	I1004 04:17:01.901749   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 24/120
	I1004 04:17:02.903815   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 25/120
	I1004 04:17:03.905467   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 26/120
	I1004 04:17:04.907074   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 27/120
	I1004 04:17:05.908829   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 28/120
	I1004 04:17:06.910491   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 29/120
	I1004 04:17:07.911980   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 30/120
	I1004 04:17:08.913496   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 31/120
	I1004 04:17:09.915119   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 32/120
	I1004 04:17:10.916704   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 33/120
	I1004 04:17:11.918336   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 34/120
	I1004 04:17:12.920715   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 35/120
	I1004 04:17:13.922341   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 36/120
	I1004 04:17:14.923968   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 37/120
	I1004 04:17:15.925735   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 38/120
	I1004 04:17:16.927313   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 39/120
	I1004 04:17:17.928975   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 40/120
	I1004 04:17:18.930909   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 41/120
	I1004 04:17:19.933050   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 42/120
	I1004 04:17:20.934598   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 43/120
	I1004 04:17:21.936331   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 44/120
	I1004 04:17:22.938345   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 45/120
	I1004 04:17:23.939918   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 46/120
	I1004 04:17:24.941409   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 47/120
	I1004 04:17:25.943466   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 48/120
	I1004 04:17:26.945011   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 49/120
	I1004 04:17:27.947144   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 50/120
	I1004 04:17:28.948665   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 51/120
	I1004 04:17:29.950825   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 52/120
	I1004 04:17:30.952223   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 53/120
	I1004 04:17:31.953632   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 54/120
	I1004 04:17:32.955598   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 55/120
	I1004 04:17:33.956958   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 56/120
	I1004 04:17:34.958434   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 57/120
	I1004 04:17:35.959769   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 58/120
	I1004 04:17:36.961290   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 59/120
	I1004 04:17:37.962850   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 60/120
	I1004 04:17:38.964254   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 61/120
	I1004 04:17:39.966216   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 62/120
	I1004 04:17:40.967676   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 63/120
	I1004 04:17:41.969197   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 64/120
	I1004 04:17:42.971422   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 65/120
	I1004 04:17:43.972963   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 66/120
	I1004 04:17:44.974427   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 67/120
	I1004 04:17:45.976083   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 68/120
	I1004 04:17:46.978389   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 69/120
	I1004 04:17:47.980805   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 70/120
	I1004 04:17:48.983119   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 71/120
	I1004 04:17:49.984664   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 72/120
	I1004 04:17:50.986285   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 73/120
	I1004 04:17:51.987817   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 74/120
	I1004 04:17:52.989927   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 75/120
	I1004 04:17:53.991595   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 76/120
	I1004 04:17:54.992983   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 77/120
	I1004 04:17:55.994175   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 78/120
	I1004 04:17:56.995866   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 79/120
	I1004 04:17:57.997415   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 80/120
	I1004 04:17:58.998924   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 81/120
	I1004 04:18:00.000707   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 82/120
	I1004 04:18:01.002441   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 83/120
	I1004 04:18:02.003773   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 84/120
	I1004 04:18:03.005951   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 85/120
	I1004 04:18:04.007683   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 86/120
	I1004 04:18:05.009478   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 87/120
	I1004 04:18:06.010993   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 88/120
	I1004 04:18:07.012795   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 89/120
	I1004 04:18:08.015108   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 90/120
	I1004 04:18:09.016508   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 91/120
	I1004 04:18:10.018688   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 92/120
	I1004 04:18:11.020272   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 93/120
	I1004 04:18:12.022701   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 94/120
	I1004 04:18:13.024692   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 95/120
	I1004 04:18:14.026333   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 96/120
	I1004 04:18:15.027902   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 97/120
	I1004 04:18:16.029204   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 98/120
	I1004 04:18:17.030683   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 99/120
	I1004 04:18:18.032866   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 100/120
	I1004 04:18:19.034173   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 101/120
	I1004 04:18:20.035624   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 102/120
	I1004 04:18:21.037297   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 103/120
	I1004 04:18:22.039032   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 104/120
	I1004 04:18:23.041153   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 105/120
	I1004 04:18:24.042691   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 106/120
	I1004 04:18:25.044287   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 107/120
	I1004 04:18:26.045712   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 108/120
	I1004 04:18:27.047286   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 109/120
	I1004 04:18:28.049699   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 110/120
	I1004 04:18:29.051242   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 111/120
	I1004 04:18:30.053027   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 112/120
	I1004 04:18:31.054405   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 113/120
	I1004 04:18:32.055868   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 114/120
	I1004 04:18:33.058291   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 115/120
	I1004 04:18:34.060313   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 116/120
	I1004 04:18:35.061925   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 117/120
	I1004 04:18:36.063409   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 118/120
	I1004 04:18:37.064995   64330 main.go:141] libmachine: (embed-certs-934812) Waiting for machine to stop 119/120
	I1004 04:18:38.066557   64330 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1004 04:18:38.066601   64330 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 04:18:38.068559   64330 out.go:201] 
	W1004 04:18:38.069964   64330 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1004 04:18:38.069981   64330 out.go:270] * 
	* 
	W1004 04:18:38.072747   64330 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 04:18:38.074679   64330 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-934812 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-934812 -n embed-certs-934812
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-934812 -n embed-certs-934812: exit status 3 (18.632155503s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 04:18:56.708171   66263 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.74:22: connect: no route to host
	E1004 04:18:56.708206   66263 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.74:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-934812" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.14s)
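Note on the failure pattern above: the stop path backs up /etc/cni and /etc/kubernetes into /var/lib/minikube/backup, asks the driver to stop, and then polls the machine state once per second for 120 attempts before giving up; that bounded wait is what surfaces as GUEST_STOP_TIMEOUT and exit status 82. A minimal Go sketch of that loop, using hypothetical Driver/stopWithTimeout names rather than the real minikube/libmachine API:

```go
// A minimal sketch of the bounded stop-wait pattern visible in the log above:
// request a shutdown, then poll the machine state once per second for up to
// 120 attempts before giving up with the timeout that surfaces as
// GUEST_STOP_TIMEOUT / exit status 82. Driver, fakeDriver and stopWithTimeout
// are illustrative names, not the real minikube/libmachine API.
package main

import (
	"errors"
	"fmt"
	"time"
)

type Driver interface {
	Stop() error               // ask the VM to shut down
	GetState() (string, error) // e.g. "Running" or "Stopped"
}

var errStopTimeout = errors.New(`unable to stop vm, current state "Running"`)

func stopWithTimeout(d Driver, attempts int, delay time.Duration) error {
	if err := d.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		if state, err := d.GetState(); err == nil && state == "Stopped" {
			return nil
		}
		time.Sleep(delay)
	}
	return errStopTimeout
}

// fakeDriver never leaves "Running", reproducing the failure mode in this run.
type fakeDriver struct{}

func (fakeDriver) Stop() error               { return nil }
func (fakeDriver) GetState() (string, error) { return "Running", nil }

func main() {
	// The real loop uses 120 attempts with a one-second delay (~2 minutes);
	// shortened here so the demo finishes quickly.
	if err := stopWithTimeout(fakeDriver{}, 3, 10*time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}
```

With 120 attempts and a one-second delay, this matches the roughly two minutes the stop command spends before exiting 82 in the run above.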

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658545 -n no-preload-658545
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658545 -n no-preload-658545: exit status 3 (3.16887019s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 04:18:29.156117   66106 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host
	E1004 04:18:29.156143   66106 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-658545 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1004 04:18:32.072282   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-658545 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151733922s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-658545 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658545 -n no-preload-658545
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658545 -n no-preload-658545: exit status 3 (3.062625048s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 04:18:38.372135   66217 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host
	E1004 04:18:38.372156   66217 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-658545" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
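EnableAddonAfterStop first asserts the post-stop host state: it runs `minikube status --format={{.Host}}` for the profile and expects the literal string "Stopped", which is why the "Error" result above fails the test before the addon is even enabled. A sketch of that check; the binary path and profile name are placeholders, the real test drives out/minikube-linux-amd64 through its own helpers:

```go
// A sketch of the assertion this test makes before enabling anything: run
// `minikube status --format={{.Host}}` for the profile and require the literal
// string "Stopped". The binary path and profile name are placeholders.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostStatus(minikube, profile string) (string, error) {
	// `minikube status` exits non-zero when the host is not running, so keep
	// whatever stdout was produced even when err != nil.
	out, err := exec.Command(minikube, "status", "--format={{.Host}}", "-p", profile).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	status, _ := hostStatus("minikube", "no-preload-658545")
	if status != "Stopped" {
		fmt.Printf("expected post-stop host status %q but got %q\n", "Stopped", status)
	}
}
```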

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-281471 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-281471 --alsologtostderr -v=3: exit status 82 (2m0.508911622s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-281471"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 04:18:51.736863   66459 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:18:51.737003   66459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:18:51.737015   66459 out.go:358] Setting ErrFile to fd 2...
	I1004 04:18:51.737021   66459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:18:51.737231   66459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:18:51.737489   66459 out.go:352] Setting JSON to false
	I1004 04:18:51.737586   66459 mustload.go:65] Loading cluster: default-k8s-diff-port-281471
	I1004 04:18:51.737960   66459 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:18:51.738043   66459 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/config.json ...
	I1004 04:18:51.738229   66459 mustload.go:65] Loading cluster: default-k8s-diff-port-281471
	I1004 04:18:51.738369   66459 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:18:51.738402   66459 stop.go:39] StopHost: default-k8s-diff-port-281471
	I1004 04:18:51.738832   66459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:18:51.738912   66459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:18:51.754004   66459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34595
	I1004 04:18:51.754591   66459 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:18:51.755249   66459 main.go:141] libmachine: Using API Version  1
	I1004 04:18:51.755279   66459 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:18:51.755639   66459 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:18:51.757915   66459 out.go:177] * Stopping node "default-k8s-diff-port-281471"  ...
	I1004 04:18:51.759165   66459 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1004 04:18:51.759186   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:18:51.759403   66459 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1004 04:18:51.759436   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:18:51.762033   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:18:51.762483   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:17:56 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:18:51.762525   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:18:51.762650   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:18:51.762811   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:18:51.762954   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:18:51.763097   66459 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:18:51.854382   66459 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1004 04:18:51.922803   66459 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1004 04:18:51.983947   66459 main.go:141] libmachine: Stopping "default-k8s-diff-port-281471"...
	I1004 04:18:51.983992   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:18:51.985644   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Stop
	I1004 04:18:51.989191   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 0/120
	I1004 04:18:52.990799   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 1/120
	I1004 04:18:53.992314   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 2/120
	I1004 04:18:54.993784   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 3/120
	I1004 04:18:55.995400   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 4/120
	I1004 04:18:56.997586   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 5/120
	I1004 04:18:57.999123   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 6/120
	I1004 04:18:59.001013   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 7/120
	I1004 04:19:00.002479   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 8/120
	I1004 04:19:01.003866   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 9/120
	I1004 04:19:02.006359   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 10/120
	I1004 04:19:03.007872   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 11/120
	I1004 04:19:04.009293   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 12/120
	I1004 04:19:05.010601   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 13/120
	I1004 04:19:06.012340   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 14/120
	I1004 04:19:07.014536   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 15/120
	I1004 04:19:08.016077   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 16/120
	I1004 04:19:09.018746   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 17/120
	I1004 04:19:10.020228   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 18/120
	I1004 04:19:11.022141   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 19/120
	I1004 04:19:12.023870   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 20/120
	I1004 04:19:13.025438   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 21/120
	I1004 04:19:14.027081   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 22/120
	I1004 04:19:15.028749   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 23/120
	I1004 04:19:16.030312   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 24/120
	I1004 04:19:17.032834   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 25/120
	I1004 04:19:18.034326   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 26/120
	I1004 04:19:19.036288   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 27/120
	I1004 04:19:20.037984   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 28/120
	I1004 04:19:21.039647   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 29/120
	I1004 04:19:22.041263   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 30/120
	I1004 04:19:23.042739   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 31/120
	I1004 04:19:24.044132   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 32/120
	I1004 04:19:25.045980   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 33/120
	I1004 04:19:26.047363   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 34/120
	I1004 04:19:27.049870   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 35/120
	I1004 04:19:28.051407   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 36/120
	I1004 04:19:29.052938   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 37/120
	I1004 04:19:30.054718   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 38/120
	I1004 04:19:31.056284   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 39/120
	I1004 04:19:32.057856   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 40/120
	I1004 04:19:33.059817   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 41/120
	I1004 04:19:34.061326   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 42/120
	I1004 04:19:35.062716   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 43/120
	I1004 04:19:36.064471   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 44/120
	I1004 04:19:37.066623   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 45/120
	I1004 04:19:38.068063   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 46/120
	I1004 04:19:39.070417   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 47/120
	I1004 04:19:40.072177   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 48/120
	I1004 04:19:41.074032   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 49/120
	I1004 04:19:42.075633   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 50/120
	I1004 04:19:43.077502   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 51/120
	I1004 04:19:44.079091   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 52/120
	I1004 04:19:45.080760   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 53/120
	I1004 04:19:46.082106   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 54/120
	I1004 04:19:47.084572   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 55/120
	I1004 04:19:48.086113   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 56/120
	I1004 04:19:49.087767   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 57/120
	I1004 04:19:50.089297   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 58/120
	I1004 04:19:51.090749   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 59/120
	I1004 04:19:52.092464   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 60/120
	I1004 04:19:53.093940   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 61/120
	I1004 04:19:54.095598   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 62/120
	I1004 04:19:55.097393   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 63/120
	I1004 04:19:56.098915   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 64/120
	I1004 04:19:57.101250   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 65/120
	I1004 04:19:58.102720   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 66/120
	I1004 04:19:59.104217   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 67/120
	I1004 04:20:00.105793   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 68/120
	I1004 04:20:01.107607   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 69/120
	I1004 04:20:02.109946   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 70/120
	I1004 04:20:03.111725   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 71/120
	I1004 04:20:04.113275   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 72/120
	I1004 04:20:05.115730   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 73/120
	I1004 04:20:06.117578   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 74/120
	I1004 04:20:07.119852   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 75/120
	I1004 04:20:08.121124   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 76/120
	I1004 04:20:09.122609   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 77/120
	I1004 04:20:10.124120   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 78/120
	I1004 04:20:11.125648   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 79/120
	I1004 04:20:12.127168   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 80/120
	I1004 04:20:13.128724   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 81/120
	I1004 04:20:14.130394   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 82/120
	I1004 04:20:15.131940   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 83/120
	I1004 04:20:16.133486   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 84/120
	I1004 04:20:17.136006   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 85/120
	I1004 04:20:18.137488   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 86/120
	I1004 04:20:19.138929   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 87/120
	I1004 04:20:20.140442   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 88/120
	I1004 04:20:21.142170   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 89/120
	I1004 04:20:22.144829   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 90/120
	I1004 04:20:23.146281   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 91/120
	I1004 04:20:24.147552   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 92/120
	I1004 04:20:25.148976   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 93/120
	I1004 04:20:26.150767   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 94/120
	I1004 04:20:27.153051   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 95/120
	I1004 04:20:28.154849   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 96/120
	I1004 04:20:29.156376   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 97/120
	I1004 04:20:30.157809   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 98/120
	I1004 04:20:31.159344   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 99/120
	I1004 04:20:32.160957   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 100/120
	I1004 04:20:33.162448   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 101/120
	I1004 04:20:34.163957   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 102/120
	I1004 04:20:35.165514   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 103/120
	I1004 04:20:36.167043   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 104/120
	I1004 04:20:37.169492   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 105/120
	I1004 04:20:38.171270   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 106/120
	I1004 04:20:39.172736   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 107/120
	I1004 04:20:40.174170   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 108/120
	I1004 04:20:41.175838   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 109/120
	I1004 04:20:42.177314   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 110/120
	I1004 04:20:43.179022   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 111/120
	I1004 04:20:44.180486   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 112/120
	I1004 04:20:45.182637   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 113/120
	I1004 04:20:46.184351   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 114/120
	I1004 04:20:47.186721   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 115/120
	I1004 04:20:48.188799   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 116/120
	I1004 04:20:49.190463   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 117/120
	I1004 04:20:50.192273   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 118/120
	I1004 04:20:51.194021   66459 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for machine to stop 119/120
	I1004 04:20:52.195294   66459 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1004 04:20:52.195347   66459 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 04:20:52.197411   66459 out.go:201] 
	W1004 04:20:52.199035   66459 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1004 04:20:52.199056   66459 out.go:270] * 
	* 
	W1004 04:20:52.201567   66459 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 04:20:52.203022   66459 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-281471 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471: exit status 3 (18.64735343s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 04:21:10.852243   67333 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	E1004 04:21:10.852263   67333 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-281471" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.16s)
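Before the wait loop, the stop path backs up guest config so a later start can restore it; the two ssh_runner lines above correspond to rsync'ing /etc/cni and /etc/kubernetes into /var/lib/minikube/backup with their relative paths preserved. A sketch of that step, executed locally for illustration (minikube runs the same commands over SSH inside the guest):

```go
// A sketch of the "backing up vm config" step that runs before the stop loop
// (the two ssh_runner rsync commands above): copy /etc/cni and /etc/kubernetes
// into /var/lib/minikube/backup with their relative paths preserved. Executed
// locally here for illustration; minikube runs these over SSH in the guest.
package main

import (
	"fmt"
	"os/exec"
)

func backupGuestConfig() error {
	cmds := [][]string{
		{"sudo", "mkdir", "-p", "/var/lib/minikube/backup"},
		{"sudo", "rsync", "--archive", "--relative", "/etc/cni", "/var/lib/minikube/backup"},
		{"sudo", "rsync", "--archive", "--relative", "/etc/kubernetes", "/var/lib/minikube/backup"},
	}
	for _, args := range cmds {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := backupGuestConfig(); err != nil {
		fmt.Println("backup failed:", err)
	}
}
```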

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-420062 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-420062 create -f testdata/busybox.yaml: exit status 1 (43.474305ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-420062" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-420062 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420062 -n old-k8s-version-420062
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420062 -n old-k8s-version-420062: exit status 6 (221.000726ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 04:18:54.029577   66519 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-420062" does not appear in /home/jenkins/minikube-integration/19546-9647/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-420062" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420062 -n old-k8s-version-420062
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420062 -n old-k8s-version-420062: exit status 6 (219.930453ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 04:18:54.249816   66549 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-420062" does not appear in /home/jenkins/minikube-integration/19546-9647/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-420062" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)
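DeployApp fails before any pod is created: the kubeconfig no longer contains the "old-k8s-version-420062" context, so kubectl reports that the context does not exist and `minikube status` warns that kubectl points at a stale minikube-vm. The status output itself names the recovery step, `minikube update-context`; a sketch of invoking it for the affected profile (binary path and profile name are placeholders):

```go
// The status output above points at the fix for the stale context: re-point
// the kubeconfig at the VM with `minikube update-context`. Binary path and
// profile name are placeholders.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "update-context", "-p", "old-k8s-version-420062").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("update-context failed:", err)
	}
}
```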

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (110.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-420062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-420062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m50.439255237s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-420062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-420062 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-420062 describe deploy/metrics-server -n kube-system: exit status 1 (43.323139ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-420062" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-420062 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420062 -n old-k8s-version-420062
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420062 -n old-k8s-version-420062: exit status 6 (221.046177ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 04:20:44.953489   67151 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-420062" does not appear in /home/jenkins/minikube-integration/19546-9647/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-420062" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (110.70s)
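The metrics-server enable fails because kubectl inside the guest cannot reach the apiserver ("The connection to the server localhost:8443 was refused"), so the addon manifests are never applied. A hedged sketch of a readiness gate that waits for that port before applying anything; the address, timeout, and poll interval are illustrative and not minikube's own retry logic:

```go
// A hedged sketch of a readiness gate for the failure above: wait until the
// apiserver endpoint kubectl was refused on (localhost:8443) accepts TCP
// connections before applying the addon manifests. Address, timeout and poll
// interval are illustrative; this is not minikube's own retry logic.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForAPIServer("localhost:8443", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```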

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-934812 -n embed-certs-934812
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-934812 -n embed-certs-934812: exit status 3 (3.167707664s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 04:18:59.876119   66626 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.74:22: connect: no route to host
	E1004 04:18:59.876142   66626 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.74:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-934812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-934812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153856783s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.74:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-934812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-934812 -n embed-certs-934812
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-934812 -n embed-certs-934812: exit status 3 (3.062283018s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 04:19:09.092174   66690 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.74:22: connect: no route to host
	E1004 04:19:09.092201   66690 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.74:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-934812" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
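Both status probes in this test fail with "dial tcp 192.168.61.74:22: connect: no route to host", so the harness never reaches the embed-certs-934812 VM after the stop, and the addon enable exits with MK_ADDON_ENABLE_PAUSED for the same reason. A minimal sketch of the manual follow-up implied by the error box above, assuming the profile is still present (illustrative only, not taken from the harness):

	# re-check the post-stop host state; a clean stop should report "Stopped" rather than "Error"
	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-934812
	# collect logs for the GitHub issue, as the error box suggests
	out/minikube-linux-amd64 logs --file=logs.txt -p embed-certs-934812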

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (676.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-420062 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-420062 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m13.617359895s)

                                                
                                                
-- stdout --
	* [old-k8s-version-420062] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-420062" primary control-plane node in "old-k8s-version-420062" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-420062" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 04:20:48.503387   67282 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:20:48.503651   67282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:20:48.503661   67282 out.go:358] Setting ErrFile to fd 2...
	I1004 04:20:48.503665   67282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:20:48.503890   67282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:20:48.504512   67282 out.go:352] Setting JSON to false
	I1004 04:20:48.505508   67282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7393,"bootTime":1728008255,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 04:20:48.505598   67282 start.go:139] virtualization: kvm guest
	I1004 04:20:48.507688   67282 out.go:177] * [old-k8s-version-420062] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 04:20:48.509150   67282 notify.go:220] Checking for updates...
	I1004 04:20:48.509194   67282 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 04:20:48.510664   67282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 04:20:48.512156   67282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:20:48.513521   67282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:20:48.514890   67282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 04:20:48.516510   67282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 04:20:48.518467   67282 config.go:182] Loaded profile config "old-k8s-version-420062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1004 04:20:48.518900   67282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:20:48.518940   67282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:20:48.534381   67282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41741
	I1004 04:20:48.534841   67282 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:20:48.535367   67282 main.go:141] libmachine: Using API Version  1
	I1004 04:20:48.535388   67282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:20:48.535706   67282 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:20:48.535946   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:20:48.537928   67282 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1004 04:20:48.539196   67282 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 04:20:48.539520   67282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:20:48.539557   67282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:20:48.554608   67282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40945
	I1004 04:20:48.555019   67282 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:20:48.555501   67282 main.go:141] libmachine: Using API Version  1
	I1004 04:20:48.555523   67282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:20:48.555923   67282 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:20:48.556146   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:20:48.594115   67282 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 04:20:48.595228   67282 start.go:297] selected driver: kvm2
	I1004 04:20:48.595244   67282 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:20:48.595398   67282 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 04:20:48.596388   67282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:20:48.596492   67282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 04:20:48.611841   67282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 04:20:48.612371   67282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:20:48.612408   67282 cni.go:84] Creating CNI manager for ""
	I1004 04:20:48.612473   67282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:20:48.612521   67282 start.go:340] cluster config:
	{Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:20:48.612664   67282 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:20:48.614830   67282 out.go:177] * Starting "old-k8s-version-420062" primary control-plane node in "old-k8s-version-420062" cluster
	I1004 04:20:48.616153   67282 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1004 04:20:48.616201   67282 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1004 04:20:48.616217   67282 cache.go:56] Caching tarball of preloaded images
	I1004 04:20:48.616300   67282 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 04:20:48.616310   67282 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1004 04:20:48.616394   67282 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/config.json ...
	I1004 04:20:48.616577   67282 start.go:360] acquireMachinesLock for old-k8s-version-420062: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:23:34.669084   67282 start.go:364] duration metric: took 2m46.052475725s to acquireMachinesLock for "old-k8s-version-420062"
	I1004 04:23:34.669158   67282 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:34.669168   67282 fix.go:54] fixHost starting: 
	I1004 04:23:34.669584   67282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:34.669640   67282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:34.686790   67282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I1004 04:23:34.687312   67282 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:34.687829   67282 main.go:141] libmachine: Using API Version  1
	I1004 04:23:34.687857   67282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:34.688238   67282 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:34.688415   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:34.688579   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetState
	I1004 04:23:34.690288   67282 fix.go:112] recreateIfNeeded on old-k8s-version-420062: state=Stopped err=<nil>
	I1004 04:23:34.690326   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	W1004 04:23:34.690467   67282 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:34.692283   67282 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-420062" ...
	I1004 04:23:34.693590   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .Start
	I1004 04:23:34.693792   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring networks are active...
	I1004 04:23:34.694582   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring network default is active
	I1004 04:23:34.694917   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring network mk-old-k8s-version-420062 is active
	I1004 04:23:34.695322   67282 main.go:141] libmachine: (old-k8s-version-420062) Getting domain xml...
	I1004 04:23:34.696052   67282 main.go:141] libmachine: (old-k8s-version-420062) Creating domain...
	I1004 04:23:35.995511   67282 main.go:141] libmachine: (old-k8s-version-420062) Waiting to get IP...
	I1004 04:23:35.996465   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:35.996962   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:35.997031   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:35.996923   68093 retry.go:31] will retry after 296.620059ms: waiting for machine to come up
	I1004 04:23:36.295737   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:36.296226   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:36.296257   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:36.296182   68093 retry.go:31] will retry after 311.736827ms: waiting for machine to come up
	I1004 04:23:36.610158   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:36.610804   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:36.610829   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:36.610759   68093 retry.go:31] will retry after 440.646496ms: waiting for machine to come up
	I1004 04:23:37.053487   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:37.053956   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:37.053981   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:37.053923   68093 retry.go:31] will retry after 550.190101ms: waiting for machine to come up
	I1004 04:23:37.605404   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:37.605775   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:37.605815   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:37.605743   68093 retry.go:31] will retry after 721.648529ms: waiting for machine to come up
	I1004 04:23:38.328819   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:38.329323   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:38.329362   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:38.329281   68093 retry.go:31] will retry after 825.234448ms: waiting for machine to come up
	I1004 04:23:39.155736   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:39.156199   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:39.156229   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:39.156150   68093 retry.go:31] will retry after 970.793402ms: waiting for machine to come up
	I1004 04:23:40.128963   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:40.129454   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:40.129507   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:40.129419   68093 retry.go:31] will retry after 1.460395601s: waiting for machine to come up
	I1004 04:23:41.592145   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:41.592653   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:41.592677   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:41.592600   68093 retry.go:31] will retry after 1.397092356s: waiting for machine to come up
	I1004 04:23:42.992176   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:42.992670   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:42.992724   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:42.992663   68093 retry.go:31] will retry after 1.560294099s: waiting for machine to come up
	I1004 04:23:44.555619   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:44.556128   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:44.556154   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:44.556061   68093 retry.go:31] will retry after 2.564674777s: waiting for machine to come up
	I1004 04:23:47.123819   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:47.124235   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:47.124263   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:47.124181   68093 retry.go:31] will retry after 2.408805702s: waiting for machine to come up
	I1004 04:23:49.535979   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:49.536361   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:49.536388   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:49.536332   68093 retry.go:31] will retry after 4.242056709s: waiting for machine to come up
	I1004 04:23:53.783128   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.783631   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has current primary IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.783669   67282 main.go:141] libmachine: (old-k8s-version-420062) Found IP for machine: 192.168.50.146
	I1004 04:23:53.783684   67282 main.go:141] libmachine: (old-k8s-version-420062) Reserving static IP address...
	I1004 04:23:53.784173   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "old-k8s-version-420062", mac: "52:54:00:fb:e4:4f", ip: "192.168.50.146"} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.784206   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | skip adding static IP to network mk-old-k8s-version-420062 - found existing host DHCP lease matching {name: "old-k8s-version-420062", mac: "52:54:00:fb:e4:4f", ip: "192.168.50.146"}
	I1004 04:23:53.784222   67282 main.go:141] libmachine: (old-k8s-version-420062) Reserved static IP address: 192.168.50.146
	I1004 04:23:53.784238   67282 main.go:141] libmachine: (old-k8s-version-420062) Waiting for SSH to be available...
	I1004 04:23:53.784250   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Getting to WaitForSSH function...
	I1004 04:23:53.786551   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.786985   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.787016   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.787207   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using SSH client type: external
	I1004 04:23:53.787244   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa (-rw-------)
	I1004 04:23:53.787285   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:23:53.787301   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | About to run SSH command:
	I1004 04:23:53.787315   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | exit 0
	I1004 04:23:53.916121   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | SSH cmd err, output: <nil>: 
	I1004 04:23:53.916487   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetConfigRaw
	I1004 04:23:53.917200   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:53.919846   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.920295   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.920323   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.920641   67282 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/config.json ...
	I1004 04:23:53.920902   67282 machine.go:93] provisionDockerMachine start ...
	I1004 04:23:53.920930   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:53.921137   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:53.923647   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.924000   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.924039   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.924198   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:53.924375   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:53.924508   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:53.924659   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:53.924796   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:53.925024   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:53.925036   67282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:23:54.044565   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:23:54.044595   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.044820   67282 buildroot.go:166] provisioning hostname "old-k8s-version-420062"
	I1004 04:23:54.044837   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.045006   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.047682   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.048032   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.048060   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.048186   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.048376   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.048525   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.048694   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.048853   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.049077   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.049098   67282 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-420062 && echo "old-k8s-version-420062" | sudo tee /etc/hostname
	I1004 04:23:54.183772   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-420062
	
	I1004 04:23:54.183835   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.186969   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.187333   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.187368   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.187754   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.188000   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.188177   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.188334   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.188559   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.188778   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.188803   67282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-420062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-420062/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-420062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:23:54.313827   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:54.313852   67282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:23:54.313896   67282 buildroot.go:174] setting up certificates
	I1004 04:23:54.313913   67282 provision.go:84] configureAuth start
	I1004 04:23:54.313925   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.314208   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:54.317028   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.317378   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.317408   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.317549   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.320292   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.320690   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.320718   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.320874   67282 provision.go:143] copyHostCerts
	I1004 04:23:54.320945   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:23:54.320957   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:23:54.321020   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:23:54.321144   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:23:54.321157   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:23:54.321184   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:23:54.321269   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:23:54.321279   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:23:54.321306   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:23:54.321378   67282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-420062 san=[127.0.0.1 192.168.50.146 localhost minikube old-k8s-version-420062]
	I1004 04:23:54.395370   67282 provision.go:177] copyRemoteCerts
	I1004 04:23:54.395422   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:23:54.395452   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.398647   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.399153   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.399194   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.399392   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.399582   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.399852   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.399991   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:54.491055   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:23:54.523206   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1004 04:23:54.549843   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:23:54.580403   67282 provision.go:87] duration metric: took 266.475364ms to configureAuth
	I1004 04:23:54.580438   67282 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:23:54.580645   67282 config.go:182] Loaded profile config "old-k8s-version-420062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1004 04:23:54.580736   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.583200   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.583489   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.583522   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.583672   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.583871   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.584066   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.584195   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.584402   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.584567   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.584582   67282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:23:54.835402   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:23:54.835436   67282 machine.go:96] duration metric: took 914.509404ms to provisionDockerMachine
	I1004 04:23:54.835451   67282 start.go:293] postStartSetup for "old-k8s-version-420062" (driver="kvm2")
	I1004 04:23:54.835466   67282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:23:54.835491   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:54.835870   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:23:54.835902   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.838257   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.838645   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.838670   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.838810   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.838972   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.839117   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.839247   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:54.927041   67282 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:23:54.931330   67282 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:23:54.931357   67282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:23:54.931424   67282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:23:54.931538   67282 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:23:54.931658   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:23:54.941402   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:54.967433   67282 start.go:296] duration metric: took 131.968424ms for postStartSetup
	I1004 04:23:54.967495   67282 fix.go:56] duration metric: took 20.29830643s for fixHost
	I1004 04:23:54.967523   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.970138   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.970485   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.970502   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.970802   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.971000   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.971164   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.971330   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.971560   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.971739   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.971751   67282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:23:55.089031   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015835.056238818
	
	I1004 04:23:55.089054   67282 fix.go:216] guest clock: 1728015835.056238818
	I1004 04:23:55.089063   67282 fix.go:229] Guest: 2024-10-04 04:23:55.056238818 +0000 UTC Remote: 2024-10-04 04:23:54.967501465 +0000 UTC m=+186.499621032 (delta=88.737353ms)
	I1004 04:23:55.089086   67282 fix.go:200] guest clock delta is within tolerance: 88.737353ms
	I1004 04:23:55.089093   67282 start.go:83] releasing machines lock for "old-k8s-version-420062", held for 20.419961099s
	I1004 04:23:55.089124   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.089472   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:55.092047   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.092519   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.092552   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.092784   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093364   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093566   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093670   67282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:23:55.093715   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:55.093808   67282 ssh_runner.go:195] Run: cat /version.json
	I1004 04:23:55.093834   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:55.096451   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.096829   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.096862   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.096881   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.097173   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:55.097364   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:55.097446   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.097474   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.097548   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:55.097685   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:55.097816   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:55.097823   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:55.097953   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:55.098106   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:55.207195   67282 ssh_runner.go:195] Run: systemctl --version
	I1004 04:23:55.214080   67282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:23:55.369882   67282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:23:55.376111   67282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:23:55.376171   67282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:23:55.393916   67282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:23:55.393945   67282 start.go:495] detecting cgroup driver to use...
	I1004 04:23:55.394015   67282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:23:55.411330   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:23:55.427665   67282 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:23:55.427734   67282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:23:55.445180   67282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:23:55.465131   67282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:23:55.596260   67282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:23:55.781647   67282 docker.go:233] disabling docker service ...
	I1004 04:23:55.781711   67282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:23:55.801252   67282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:23:55.817688   67282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:23:55.952563   67282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:23:56.081096   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:23:56.096194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:23:56.116859   67282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1004 04:23:56.116924   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.129060   67282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:23:56.129133   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.141246   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.158759   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.172580   67282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:23:56.192027   67282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:23:56.206698   67282 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:23:56.206757   67282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:23:56.223074   67282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:23:56.241061   67282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:56.365616   67282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:23:56.474445   67282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:23:56.474519   67282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:23:56.480077   67282 start.go:563] Will wait 60s for crictl version
	I1004 04:23:56.480133   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:23:56.485207   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:23:56.537710   67282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:23:56.537802   67282 ssh_runner.go:195] Run: crio --version
	I1004 04:23:56.571679   67282 ssh_runner.go:195] Run: crio --version
	I1004 04:23:56.605639   67282 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1004 04:23:56.606945   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:56.610421   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:56.610952   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:56.610976   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:56.611373   67282 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1004 04:23:56.615872   67282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:23:56.629783   67282 kubeadm.go:883] updating cluster {Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:23:56.629932   67282 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1004 04:23:56.629983   67282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:56.690260   67282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:23:56.690343   67282 ssh_runner.go:195] Run: which lz4
	I1004 04:23:56.695808   67282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:23:56.701593   67282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:23:56.701623   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1004 04:23:58.525796   67282 crio.go:462] duration metric: took 1.830039762s to copy over tarball
	I1004 04:23:58.525868   67282 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:24:01.514552   67282 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.98865618s)
	I1004 04:24:01.514585   67282 crio.go:469] duration metric: took 2.988759159s to extract the tarball
	I1004 04:24:01.514595   67282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:24:01.562130   67282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:01.598856   67282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:24:01.598882   67282 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 04:24:01.598960   67282 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:01.599035   67282 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.599047   67282 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.599048   67282 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1004 04:24:01.599020   67282 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.599025   67282 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.598967   67282 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.598967   67282 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.600760   67282 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.600772   67282 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1004 04:24:01.600767   67282 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:01.600791   67282 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.600802   67282 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.600804   67282 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.600807   67282 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.600840   67282 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.837527   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.877366   67282 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1004 04:24:01.877413   67282 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.877464   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:01.882328   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.914693   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.934055   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.941737   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.943929   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.944540   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.948337   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.970977   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.995537   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1004 04:24:02.127073   67282 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1004 04:24:02.127097   67282 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1004 04:24:02.127121   67282 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.127121   67282 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.127156   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.127159   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128471   67282 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1004 04:24:02.128532   67282 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.128535   67282 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1004 04:24:02.128560   67282 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.128571   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128595   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128598   67282 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1004 04:24:02.128627   67282 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.128669   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128730   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1004 04:24:02.128761   67282 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1004 04:24:02.128783   67282 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1004 04:24:02.128815   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.133675   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.133724   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.141911   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.141950   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.141989   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.142044   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.263733   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.263744   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.263798   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.265990   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.297523   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.297566   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.379282   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.379318   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.379331   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.417271   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.454521   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.454559   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.496644   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1004 04:24:02.533632   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1004 04:24:02.533690   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1004 04:24:02.533750   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1004 04:24:02.568138   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1004 04:24:02.568153   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1004 04:24:02.911933   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:03.055844   67282 cache_images.go:92] duration metric: took 1.456943316s to LoadCachedImages
	W1004 04:24:03.055959   67282 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1004 04:24:03.055976   67282 kubeadm.go:934] updating node { 192.168.50.146 8443 v1.20.0 crio true true} ...
	I1004 04:24:03.056087   67282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-420062 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:03.056162   67282 ssh_runner.go:195] Run: crio config
	I1004 04:24:03.103752   67282 cni.go:84] Creating CNI manager for ""
	I1004 04:24:03.103792   67282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:03.103805   67282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:03.103826   67282 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.146 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-420062 NodeName:old-k8s-version-420062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1004 04:24:03.103952   67282 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-420062"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:03.104008   67282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1004 04:24:03.114316   67282 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:03.114372   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:03.124059   67282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1004 04:24:03.143310   67282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:03.161143   67282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1004 04:24:03.178444   67282 ssh_runner.go:195] Run: grep 192.168.50.146	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:03.182235   67282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:03.195103   67282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:03.317820   67282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:03.334820   67282 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062 for IP: 192.168.50.146
	I1004 04:24:03.334840   67282 certs.go:194] generating shared ca certs ...
	I1004 04:24:03.334855   67282 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:03.335008   67282 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:03.335049   67282 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:03.335059   67282 certs.go:256] generating profile certs ...
	I1004 04:24:03.335156   67282 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.key
	I1004 04:24:03.335212   67282 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key.c1f9ed6b
	I1004 04:24:03.335260   67282 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key
	I1004 04:24:03.335368   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:03.335394   67282 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:03.335401   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:03.335426   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:03.335451   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:03.335476   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:03.335518   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:03.336260   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:03.373985   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:03.408150   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:03.444219   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:03.493160   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1004 04:24:03.533084   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 04:24:03.565405   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:03.613938   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:24:03.642711   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:03.674784   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:03.706968   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:03.731329   67282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:03.749003   67282 ssh_runner.go:195] Run: openssl version
	I1004 04:24:03.755219   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:03.766499   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.771322   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.771413   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.778185   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:03.790581   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:03.802556   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.807312   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.807373   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.813595   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:03.825043   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:03.835389   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.840004   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.840051   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.847540   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:03.862303   67282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:03.868029   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:03.874811   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:03.880797   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:03.886622   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:03.892273   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:03.898129   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1004 04:24:03.905775   67282 kubeadm.go:392] StartCluster: {Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:03.905852   67282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:03.905890   67282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:03.954627   67282 cri.go:89] found id: ""
	I1004 04:24:03.954702   67282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:03.965146   67282 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:03.965170   67282 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:03.965236   67282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:03.975404   67282 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:03.976362   67282 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-420062" does not appear in /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:24:03.976990   67282 kubeconfig.go:62] /home/jenkins/minikube-integration/19546-9647/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-420062" cluster setting kubeconfig missing "old-k8s-version-420062" context setting]
	I1004 04:24:03.977906   67282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:03.979485   67282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:03.989487   67282 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.146
	I1004 04:24:03.989517   67282 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:03.989529   67282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:03.989577   67282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:04.031536   67282 cri.go:89] found id: ""
	I1004 04:24:04.031607   67282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:04.048652   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:04.057813   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:04.057830   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:04.057867   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:24:04.066213   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:04.066252   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:04.074904   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:24:04.083485   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:04.083522   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:04.092314   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:24:04.100528   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:04.100572   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:04.109232   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:24:04.118051   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:04.118091   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:24:04.127430   67282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:04.137949   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:04.272627   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:04.940435   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.181288   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.268873   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.373549   67282 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:05.373653   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:05.874543   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.374154   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.874343   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:07.374744   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:07.874734   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:08.374255   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:08.874627   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:09.374627   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:09.874278   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.374675   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.873949   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:11.373966   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:11.873775   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:12.373874   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:12.874010   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:13.374575   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:13.873857   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:14.374241   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:14.873863   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.374063   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.873950   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:16.373819   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:16.874290   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:17.374357   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:17.874163   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:18.374160   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:18.874214   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.374670   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.874355   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:20.374335   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:20.874299   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:21.374492   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:21.874293   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:22.373890   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:22.874622   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.374639   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.873822   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:24.373911   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:24.874756   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:25.374035   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:25.873874   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.374503   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.874371   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:27.374335   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:27.873941   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:28.373861   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:28.874265   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:29.374364   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:29.874581   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:30.373909   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:30.874089   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.374708   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.874696   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:32.374061   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:32.874233   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:33.374290   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:33.874344   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:34.374158   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:34.873848   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:35.373944   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:35.874697   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.373831   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.874231   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:37.374723   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:37.873861   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:38.374206   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:38.873705   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:39.374361   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:39.874144   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.373793   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.873796   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:41.374744   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:41.874442   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:42.374561   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:42.874638   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:43.374677   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:43.874583   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:44.374117   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:44.874398   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.374755   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.874039   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:46.374598   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:46.874446   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:47.374384   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:47.874596   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:48.374021   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:48.874471   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:49.374480   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:49.874689   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.373726   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.874543   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:51.373743   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:51.874513   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:52.374719   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:52.874305   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:53.374419   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:53.874725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.373903   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.874127   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.374051   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.874019   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:56.373828   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:56.874027   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:57.373914   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:57.874598   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:58.374106   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:58.874143   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:59.373810   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:59.874682   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:00.374672   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:00.873725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.374175   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.874724   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:02.374725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:02.874746   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:03.373689   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:03.874594   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:04.374498   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:04.874377   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:05.374050   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:05.374139   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:05.412153   67282 cri.go:89] found id: ""
	I1004 04:25:05.412185   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.412195   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:05.412202   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:05.412264   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:05.446725   67282 cri.go:89] found id: ""
	I1004 04:25:05.446750   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.446758   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:05.446763   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:05.446816   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:05.487652   67282 cri.go:89] found id: ""
	I1004 04:25:05.487678   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.487686   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:05.487691   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:05.487752   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:05.526275   67282 cri.go:89] found id: ""
	I1004 04:25:05.526302   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.526310   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:05.526319   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:05.526375   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:05.565004   67282 cri.go:89] found id: ""
	I1004 04:25:05.565034   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.565045   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:05.565052   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:05.565101   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:05.601963   67282 cri.go:89] found id: ""
	I1004 04:25:05.601990   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.601998   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:05.602003   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:05.602051   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:05.638621   67282 cri.go:89] found id: ""
	I1004 04:25:05.638651   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.638660   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:05.638666   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:05.638720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:05.678042   67282 cri.go:89] found id: ""
	I1004 04:25:05.678071   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.678082   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:05.678093   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:05.678107   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:05.720677   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:05.720707   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:05.775219   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:05.775252   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:05.789748   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:05.789774   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:05.918752   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:05.918783   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:05.918798   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:08.493206   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:08.506490   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:08.506549   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:08.545875   67282 cri.go:89] found id: ""
	I1004 04:25:08.545909   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.545920   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:08.545933   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:08.545997   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:08.582348   67282 cri.go:89] found id: ""
	I1004 04:25:08.582375   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.582383   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:08.582389   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:08.582438   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:08.637763   67282 cri.go:89] found id: ""
	I1004 04:25:08.637797   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.637809   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:08.637816   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:08.637890   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:08.681171   67282 cri.go:89] found id: ""
	I1004 04:25:08.681205   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.681216   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:08.681224   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:08.681289   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:08.719513   67282 cri.go:89] found id: ""
	I1004 04:25:08.719542   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.719549   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:08.719555   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:08.719607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:08.762152   67282 cri.go:89] found id: ""
	I1004 04:25:08.762175   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.762183   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:08.762188   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:08.762251   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:08.799857   67282 cri.go:89] found id: ""
	I1004 04:25:08.799881   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.799892   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:08.799903   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:08.799954   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:08.835264   67282 cri.go:89] found id: ""
	I1004 04:25:08.835296   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.835308   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:08.835318   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:08.835330   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:08.875501   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:08.875532   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:08.929145   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:08.929178   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:08.942769   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:08.942808   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:09.025372   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:09.025401   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:09.025416   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:11.611179   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:11.625118   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:11.625253   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:11.661512   67282 cri.go:89] found id: ""
	I1004 04:25:11.661540   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.661547   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:11.661553   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:11.661607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:11.704902   67282 cri.go:89] found id: ""
	I1004 04:25:11.704931   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.704941   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:11.704948   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:11.705007   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:11.741747   67282 cri.go:89] found id: ""
	I1004 04:25:11.741770   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.741780   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:11.741787   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:11.741841   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:11.776838   67282 cri.go:89] found id: ""
	I1004 04:25:11.776863   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.776871   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:11.776876   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:11.776927   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:11.812996   67282 cri.go:89] found id: ""
	I1004 04:25:11.813024   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.813033   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:11.813038   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:11.813097   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:11.853718   67282 cri.go:89] found id: ""
	I1004 04:25:11.853744   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.853752   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:11.853758   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:11.853813   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:11.896840   67282 cri.go:89] found id: ""
	I1004 04:25:11.896867   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.896879   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:11.896885   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:11.896943   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:11.932529   67282 cri.go:89] found id: ""
	I1004 04:25:11.932552   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.932561   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:11.932569   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:11.932580   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:11.946504   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:11.946538   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:12.024692   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:12.024713   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:12.024724   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:12.111942   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:12.111976   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:12.156483   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:12.156522   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:14.708243   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:14.722943   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:14.723007   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:14.758502   67282 cri.go:89] found id: ""
	I1004 04:25:14.758555   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.758567   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:14.758575   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:14.758633   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:14.796496   67282 cri.go:89] found id: ""
	I1004 04:25:14.796525   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.796532   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:14.796538   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:14.796595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:14.832216   67282 cri.go:89] found id: ""
	I1004 04:25:14.832247   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.832259   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:14.832266   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:14.832330   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:14.868461   67282 cri.go:89] found id: ""
	I1004 04:25:14.868491   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.868501   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:14.868509   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:14.868568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:14.909827   67282 cri.go:89] found id: ""
	I1004 04:25:14.909857   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.909867   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:14.909875   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:14.909949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:14.947809   67282 cri.go:89] found id: ""
	I1004 04:25:14.947839   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.947850   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:14.947857   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:14.947904   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:14.984073   67282 cri.go:89] found id: ""
	I1004 04:25:14.984101   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.984110   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:14.984115   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:14.984170   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:15.021145   67282 cri.go:89] found id: ""
	I1004 04:25:15.021179   67282 logs.go:282] 0 containers: []
	W1004 04:25:15.021191   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:15.021204   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:15.021217   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:15.075295   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:15.075328   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:15.088953   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:15.088980   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:15.175103   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:15.175128   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:15.175143   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:15.259004   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:15.259044   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:17.825029   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:17.839496   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:17.839574   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:17.877643   67282 cri.go:89] found id: ""
	I1004 04:25:17.877673   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.877684   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:17.877692   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:17.877751   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:17.921534   67282 cri.go:89] found id: ""
	I1004 04:25:17.921563   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.921574   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:17.921581   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:17.921634   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:17.961281   67282 cri.go:89] found id: ""
	I1004 04:25:17.961307   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.961315   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:17.961320   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:17.961386   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:18.001036   67282 cri.go:89] found id: ""
	I1004 04:25:18.001066   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.001078   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:18.001085   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:18.001156   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:18.043212   67282 cri.go:89] found id: ""
	I1004 04:25:18.043241   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.043252   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:18.043259   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:18.043319   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:18.082399   67282 cri.go:89] found id: ""
	I1004 04:25:18.082423   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.082430   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:18.082435   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:18.082493   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:18.120507   67282 cri.go:89] found id: ""
	I1004 04:25:18.120534   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.120544   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:18.120550   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:18.120605   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:18.156601   67282 cri.go:89] found id: ""
	I1004 04:25:18.156629   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.156640   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:18.156650   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:18.156663   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:18.198393   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:18.198424   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:18.250992   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:18.251032   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:18.267984   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:18.268015   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:18.343283   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:18.343303   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:18.343314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:20.922578   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:20.938037   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:20.938122   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:20.978389   67282 cri.go:89] found id: ""
	I1004 04:25:20.978417   67282 logs.go:282] 0 containers: []
	W1004 04:25:20.978426   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:20.978431   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:20.978478   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:21.033490   67282 cri.go:89] found id: ""
	I1004 04:25:21.033520   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.033528   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:21.033533   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:21.033589   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:21.087168   67282 cri.go:89] found id: ""
	I1004 04:25:21.087198   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.087209   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:21.087216   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:21.087299   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:21.144327   67282 cri.go:89] found id: ""
	I1004 04:25:21.144356   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.144366   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:21.144373   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:21.144431   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:21.183336   67282 cri.go:89] found id: ""
	I1004 04:25:21.183378   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.183390   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:21.183397   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:21.183459   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:21.221847   67282 cri.go:89] found id: ""
	I1004 04:25:21.221878   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.221892   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:21.221901   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:21.221961   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:21.258542   67282 cri.go:89] found id: ""
	I1004 04:25:21.258573   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.258584   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:21.258590   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:21.258652   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:21.303173   67282 cri.go:89] found id: ""
	I1004 04:25:21.303202   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.303211   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:21.303218   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:21.303243   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:21.358109   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:21.358146   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:21.373958   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:21.373987   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:21.450956   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:21.450980   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:21.451006   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:21.534763   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:21.534807   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:24.082856   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:24.098263   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:24.098336   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:24.144969   67282 cri.go:89] found id: ""
	I1004 04:25:24.144999   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.145009   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:24.145015   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:24.145072   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:24.185670   67282 cri.go:89] found id: ""
	I1004 04:25:24.185693   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.185702   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:24.185708   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:24.185769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:24.223657   67282 cri.go:89] found id: ""
	I1004 04:25:24.223691   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.223703   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:24.223710   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:24.223769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:24.261841   67282 cri.go:89] found id: ""
	I1004 04:25:24.261864   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.261872   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:24.261878   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:24.261938   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:24.299734   67282 cri.go:89] found id: ""
	I1004 04:25:24.299758   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.299769   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:24.299775   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:24.299867   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:24.337413   67282 cri.go:89] found id: ""
	I1004 04:25:24.337440   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.337450   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:24.337457   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:24.337523   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:24.375963   67282 cri.go:89] found id: ""
	I1004 04:25:24.375995   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.376007   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:24.376014   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:24.376073   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:24.415978   67282 cri.go:89] found id: ""
	I1004 04:25:24.416010   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.416021   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:24.416030   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:24.416045   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:24.458703   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:24.458738   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:24.510669   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:24.510704   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:24.525646   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:24.525687   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:24.603280   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:24.603310   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:24.603324   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:27.184935   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:27.200241   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:27.200321   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:27.237546   67282 cri.go:89] found id: ""
	I1004 04:25:27.237576   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.237588   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:27.237596   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:27.237653   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:27.272598   67282 cri.go:89] found id: ""
	I1004 04:25:27.272625   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.272634   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:27.272642   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:27.272700   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:27.306659   67282 cri.go:89] found id: ""
	I1004 04:25:27.306693   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.306706   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:27.306715   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:27.306779   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:27.344315   67282 cri.go:89] found id: ""
	I1004 04:25:27.344349   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.344363   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:27.344370   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:27.344428   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:27.380231   67282 cri.go:89] found id: ""
	I1004 04:25:27.380267   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.380278   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:27.380286   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:27.380346   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:27.418137   67282 cri.go:89] found id: ""
	I1004 04:25:27.418161   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.418169   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:27.418174   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:27.418225   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:27.458235   67282 cri.go:89] found id: ""
	I1004 04:25:27.458262   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.458283   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:27.458289   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:27.458342   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:27.495161   67282 cri.go:89] found id: ""
	I1004 04:25:27.495189   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.495198   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:27.495206   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:27.495217   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:27.547749   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:27.547795   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:27.563322   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:27.563355   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:27.636682   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:27.636710   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:27.636725   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:27.711316   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:27.711354   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:30.250361   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:30.265789   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:30.265866   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:30.305127   67282 cri.go:89] found id: ""
	I1004 04:25:30.305166   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.305183   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:30.305190   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:30.305258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:30.346529   67282 cri.go:89] found id: ""
	I1004 04:25:30.346560   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.346570   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:30.346577   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:30.346641   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:30.387368   67282 cri.go:89] found id: ""
	I1004 04:25:30.387407   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.387418   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:30.387425   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:30.387489   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:30.428193   67282 cri.go:89] found id: ""
	I1004 04:25:30.428230   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.428242   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:30.428248   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:30.428308   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:30.465484   67282 cri.go:89] found id: ""
	I1004 04:25:30.465509   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.465518   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:30.465523   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:30.465573   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:30.501133   67282 cri.go:89] found id: ""
	I1004 04:25:30.501163   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.501174   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:30.501181   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:30.501248   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:30.536492   67282 cri.go:89] found id: ""
	I1004 04:25:30.536519   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.536530   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:30.536536   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:30.536587   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:30.571721   67282 cri.go:89] found id: ""
	I1004 04:25:30.571745   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.571753   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:30.571761   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:30.571771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:30.626922   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:30.626958   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:30.641817   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:30.641852   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:30.725604   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:30.725633   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:30.725647   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:30.800359   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:30.800393   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:33.340747   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:33.355862   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:33.355936   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:33.397628   67282 cri.go:89] found id: ""
	I1004 04:25:33.397655   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.397662   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:33.397668   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:33.397718   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:33.442100   67282 cri.go:89] found id: ""
	I1004 04:25:33.442128   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.442137   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:33.442142   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:33.442187   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:33.481035   67282 cri.go:89] found id: ""
	I1004 04:25:33.481063   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.481076   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:33.481083   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:33.481149   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:33.516633   67282 cri.go:89] found id: ""
	I1004 04:25:33.516661   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.516669   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:33.516677   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:33.516727   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:33.556569   67282 cri.go:89] found id: ""
	I1004 04:25:33.556600   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.556610   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:33.556617   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:33.556679   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:33.591678   67282 cri.go:89] found id: ""
	I1004 04:25:33.591715   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.591724   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:33.591731   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:33.591786   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:33.626571   67282 cri.go:89] found id: ""
	I1004 04:25:33.626594   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.626602   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:33.626607   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:33.626650   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:33.664336   67282 cri.go:89] found id: ""
	I1004 04:25:33.664359   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.664367   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:33.664375   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:33.664386   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:33.748013   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:33.748047   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:33.786730   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:33.786767   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:33.839355   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:33.839392   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:33.853807   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:33.853835   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:33.920183   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:36.420485   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:36.435150   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:36.435221   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:36.471818   67282 cri.go:89] found id: ""
	I1004 04:25:36.471842   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.471850   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:36.471855   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:36.471908   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:36.511469   67282 cri.go:89] found id: ""
	I1004 04:25:36.511496   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.511504   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:36.511509   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:36.511557   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:36.552607   67282 cri.go:89] found id: ""
	I1004 04:25:36.552633   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.552641   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:36.552646   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:36.552702   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:36.596260   67282 cri.go:89] found id: ""
	I1004 04:25:36.596282   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.596290   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:36.596295   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:36.596340   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:36.636674   67282 cri.go:89] found id: ""
	I1004 04:25:36.636700   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.636708   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:36.636713   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:36.636764   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:36.675155   67282 cri.go:89] found id: ""
	I1004 04:25:36.675194   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.675206   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:36.675214   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:36.675279   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:36.713458   67282 cri.go:89] found id: ""
	I1004 04:25:36.713485   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.713493   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:36.713498   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:36.713552   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:36.754567   67282 cri.go:89] found id: ""
	I1004 04:25:36.754596   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.754607   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:36.754618   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:36.754631   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:36.824413   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:36.824439   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:36.824453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:36.900438   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:36.900471   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:36.942238   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:36.942264   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:36.992527   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:36.992556   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:39.506599   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:39.520782   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:39.520854   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:39.561853   67282 cri.go:89] found id: ""
	I1004 04:25:39.561880   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.561891   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:39.561898   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:39.561955   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:39.597548   67282 cri.go:89] found id: ""
	I1004 04:25:39.597581   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.597591   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:39.597598   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:39.597659   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:39.634481   67282 cri.go:89] found id: ""
	I1004 04:25:39.634517   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.634525   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:39.634530   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:39.634575   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:39.677077   67282 cri.go:89] found id: ""
	I1004 04:25:39.677107   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.677117   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:39.677124   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:39.677185   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:39.716334   67282 cri.go:89] found id: ""
	I1004 04:25:39.716356   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.716364   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:39.716369   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:39.716416   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:39.754765   67282 cri.go:89] found id: ""
	I1004 04:25:39.754792   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.754803   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:39.754810   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:39.754863   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:39.788782   67282 cri.go:89] found id: ""
	I1004 04:25:39.788811   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.788824   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:39.788832   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:39.788890   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:39.821946   67282 cri.go:89] found id: ""
	I1004 04:25:39.821970   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.821979   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:39.821988   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:39.822001   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:39.892629   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:39.892657   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:39.892674   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:39.973480   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:39.973515   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:40.018175   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:40.018203   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:40.068585   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:40.068620   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:42.583639   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:42.597249   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:42.597333   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:42.631993   67282 cri.go:89] found id: ""
	I1004 04:25:42.632020   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.632030   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:42.632037   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:42.632091   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:42.669708   67282 cri.go:89] found id: ""
	I1004 04:25:42.669739   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.669749   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:42.669762   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:42.669836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:42.705995   67282 cri.go:89] found id: ""
	I1004 04:25:42.706019   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.706030   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:42.706037   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:42.706094   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:42.740436   67282 cri.go:89] found id: ""
	I1004 04:25:42.740458   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.740466   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:42.740472   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:42.740524   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:42.774516   67282 cri.go:89] found id: ""
	I1004 04:25:42.774546   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.774557   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:42.774564   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:42.774614   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:42.807471   67282 cri.go:89] found id: ""
	I1004 04:25:42.807502   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.807510   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:42.807516   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:42.807561   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:42.851943   67282 cri.go:89] found id: ""
	I1004 04:25:42.851968   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.851977   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:42.851983   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:42.852040   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:42.887762   67282 cri.go:89] found id: ""
	I1004 04:25:42.887801   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.887812   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:42.887822   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:42.887834   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:42.960398   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:42.960423   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:42.960440   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:43.040078   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:43.040117   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:43.081614   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:43.081638   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:43.132744   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:43.132781   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:45.647332   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:45.660765   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:45.660834   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:45.696351   67282 cri.go:89] found id: ""
	I1004 04:25:45.696379   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.696390   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:45.696397   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:45.696449   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:45.738529   67282 cri.go:89] found id: ""
	I1004 04:25:45.738553   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.738561   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:45.738566   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:45.738621   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:45.773071   67282 cri.go:89] found id: ""
	I1004 04:25:45.773094   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.773103   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:45.773110   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:45.773165   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:45.810813   67282 cri.go:89] found id: ""
	I1004 04:25:45.810840   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.810852   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:45.810859   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:45.810913   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:45.848916   67282 cri.go:89] found id: ""
	I1004 04:25:45.848942   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.848951   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:45.848956   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:45.849014   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:45.886737   67282 cri.go:89] found id: ""
	I1004 04:25:45.886763   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.886772   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:45.886778   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:45.886825   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:45.922263   67282 cri.go:89] found id: ""
	I1004 04:25:45.922291   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.922301   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:45.922307   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:45.922364   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:45.956688   67282 cri.go:89] found id: ""
	I1004 04:25:45.956710   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.956718   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:45.956725   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:45.956737   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:46.007334   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:46.007365   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:46.020892   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:46.020916   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:46.089786   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:46.089809   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:46.089822   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:46.175987   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:46.176017   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:48.718354   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:48.733291   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:48.733347   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:48.769149   67282 cri.go:89] found id: ""
	I1004 04:25:48.769175   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.769185   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:48.769193   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:48.769249   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:48.804386   67282 cri.go:89] found id: ""
	I1004 04:25:48.804410   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.804418   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:48.804423   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:48.804467   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:48.841747   67282 cri.go:89] found id: ""
	I1004 04:25:48.841774   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.841782   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:48.841788   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:48.841836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:48.880025   67282 cri.go:89] found id: ""
	I1004 04:25:48.880048   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.880058   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:48.880064   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:48.880121   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:48.916506   67282 cri.go:89] found id: ""
	I1004 04:25:48.916530   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.916540   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:48.916547   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:48.916607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:48.952082   67282 cri.go:89] found id: ""
	I1004 04:25:48.952105   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.952116   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:48.952122   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:48.952177   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:48.986097   67282 cri.go:89] found id: ""
	I1004 04:25:48.986124   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.986135   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:48.986143   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:48.986210   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:49.020400   67282 cri.go:89] found id: ""
	I1004 04:25:49.020428   67282 logs.go:282] 0 containers: []
	W1004 04:25:49.020436   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:49.020445   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:49.020462   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:49.074724   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:49.074754   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:49.088504   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:49.088529   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:49.165940   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:49.165961   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:49.165972   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:49.244482   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:49.244519   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:51.786086   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:51.800644   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:51.800720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:51.839951   67282 cri.go:89] found id: ""
	I1004 04:25:51.839980   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.839990   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:51.839997   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:51.840055   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:51.878660   67282 cri.go:89] found id: ""
	I1004 04:25:51.878684   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.878695   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:51.878701   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:51.878762   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:51.916640   67282 cri.go:89] found id: ""
	I1004 04:25:51.916665   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.916672   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:51.916678   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:51.916725   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:51.953800   67282 cri.go:89] found id: ""
	I1004 04:25:51.953827   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.953835   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:51.953840   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:51.953897   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:51.993107   67282 cri.go:89] found id: ""
	I1004 04:25:51.993139   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.993150   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:51.993157   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:51.993214   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:52.027426   67282 cri.go:89] found id: ""
	I1004 04:25:52.027454   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.027464   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:52.027470   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:52.027521   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:52.063608   67282 cri.go:89] found id: ""
	I1004 04:25:52.063638   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.063650   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:52.063657   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:52.063717   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:52.100052   67282 cri.go:89] found id: ""
	I1004 04:25:52.100083   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.100094   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:52.100106   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:52.100125   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:52.113801   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:52.113827   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:52.201284   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:52.201311   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:52.201322   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:52.280014   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:52.280047   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:52.318120   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:52.318145   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:54.872245   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:54.886914   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:54.886990   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:54.927117   67282 cri.go:89] found id: ""
	I1004 04:25:54.927144   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.927152   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:54.927157   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:54.927205   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:54.962510   67282 cri.go:89] found id: ""
	I1004 04:25:54.962540   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.962552   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:54.962559   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:54.962619   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:54.996812   67282 cri.go:89] found id: ""
	I1004 04:25:54.996839   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.996848   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:54.996854   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:54.996905   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:55.034557   67282 cri.go:89] found id: ""
	I1004 04:25:55.034587   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.034597   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:55.034605   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:55.034667   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:55.072383   67282 cri.go:89] found id: ""
	I1004 04:25:55.072416   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.072427   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:55.072434   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:55.072494   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:55.121561   67282 cri.go:89] found id: ""
	I1004 04:25:55.121588   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.121598   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:55.121604   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:55.121775   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:55.165525   67282 cri.go:89] found id: ""
	I1004 04:25:55.165553   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.165564   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:55.165570   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:55.165627   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:55.201808   67282 cri.go:89] found id: ""
	I1004 04:25:55.201836   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.201846   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:55.201857   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:55.201870   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:55.280889   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:55.280917   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:55.280932   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:55.354979   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:55.355012   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:55.397144   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:55.397174   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:55.448710   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:55.448746   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:57.963840   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:57.977027   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:57.977085   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:58.019244   67282 cri.go:89] found id: ""
	I1004 04:25:58.019273   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.019285   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:58.019293   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:58.019351   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:58.057979   67282 cri.go:89] found id: ""
	I1004 04:25:58.058008   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.058018   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:58.058027   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:58.058084   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:58.094607   67282 cri.go:89] found id: ""
	I1004 04:25:58.094639   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.094652   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:58.094658   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:58.094726   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:58.130150   67282 cri.go:89] found id: ""
	I1004 04:25:58.130177   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.130188   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:58.130196   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:58.130259   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:58.167662   67282 cri.go:89] found id: ""
	I1004 04:25:58.167691   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.167701   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:58.167709   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:58.167769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:58.203480   67282 cri.go:89] found id: ""
	I1004 04:25:58.203568   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.203585   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:58.203594   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:58.203662   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:58.239516   67282 cri.go:89] found id: ""
	I1004 04:25:58.239537   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.239545   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:58.239551   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:58.239595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:58.275525   67282 cri.go:89] found id: ""
	I1004 04:25:58.275553   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.275564   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:58.275574   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:58.275587   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:58.331191   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:58.331224   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:58.345629   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:58.345659   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:58.416297   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:58.416315   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:58.416326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:58.490659   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:58.490694   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:01.030058   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:01.044568   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:01.044659   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:01.082652   67282 cri.go:89] found id: ""
	I1004 04:26:01.082679   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.082688   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:01.082694   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:01.082750   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:01.120781   67282 cri.go:89] found id: ""
	I1004 04:26:01.120805   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.120814   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:01.120821   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:01.120878   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:01.159494   67282 cri.go:89] found id: ""
	I1004 04:26:01.159523   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.159531   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:01.159537   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:01.159584   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:01.195482   67282 cri.go:89] found id: ""
	I1004 04:26:01.195512   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.195521   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:01.195529   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:01.195589   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:01.233971   67282 cri.go:89] found id: ""
	I1004 04:26:01.233996   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.234006   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:01.234014   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:01.234076   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:01.275935   67282 cri.go:89] found id: ""
	I1004 04:26:01.275958   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.275966   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:01.275971   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:01.276018   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:01.315512   67282 cri.go:89] found id: ""
	I1004 04:26:01.315535   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.315543   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:01.315548   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:01.315603   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:01.356465   67282 cri.go:89] found id: ""
	I1004 04:26:01.356491   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.356505   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:01.356513   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:01.356523   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:01.409237   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:01.409280   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:01.423426   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:01.423453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:01.501372   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:01.501397   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:01.501413   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:01.591087   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:01.591131   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:04.152506   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:04.166847   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:04.166911   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:04.203138   67282 cri.go:89] found id: ""
	I1004 04:26:04.203167   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.203177   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:04.203184   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:04.203243   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:04.237427   67282 cri.go:89] found id: ""
	I1004 04:26:04.237453   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.237464   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:04.237471   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:04.237525   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:04.272468   67282 cri.go:89] found id: ""
	I1004 04:26:04.272499   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.272511   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:04.272518   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:04.272584   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:04.307347   67282 cri.go:89] found id: ""
	I1004 04:26:04.307373   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.307384   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:04.307390   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:04.307448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:04.342450   67282 cri.go:89] found id: ""
	I1004 04:26:04.342487   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.342498   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:04.342506   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:04.342568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:04.382846   67282 cri.go:89] found id: ""
	I1004 04:26:04.382874   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.382885   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:04.382893   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:04.382945   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:04.418234   67282 cri.go:89] found id: ""
	I1004 04:26:04.418260   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.418268   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:04.418273   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:04.418328   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:04.453433   67282 cri.go:89] found id: ""
	I1004 04:26:04.453456   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.453464   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:04.453473   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:04.453487   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:04.502093   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:04.502123   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:04.515865   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:04.515897   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:04.595672   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:04.595698   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:04.595713   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:04.675273   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:04.675304   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:07.214965   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:07.229495   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:07.229568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:07.268541   67282 cri.go:89] found id: ""
	I1004 04:26:07.268580   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.268591   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:07.268599   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:07.268662   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:07.321382   67282 cri.go:89] found id: ""
	I1004 04:26:07.321414   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.321424   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:07.321431   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:07.321490   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:07.379840   67282 cri.go:89] found id: ""
	I1004 04:26:07.379869   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.379878   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:07.379884   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:07.379928   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:07.431304   67282 cri.go:89] found id: ""
	I1004 04:26:07.431333   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.431343   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:07.431349   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:07.431407   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:07.466853   67282 cri.go:89] found id: ""
	I1004 04:26:07.466880   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.466888   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:07.466893   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:07.466951   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:07.501587   67282 cri.go:89] found id: ""
	I1004 04:26:07.501613   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.501624   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:07.501630   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:07.501685   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:07.536326   67282 cri.go:89] found id: ""
	I1004 04:26:07.536354   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.536364   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:07.536371   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:07.536426   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:07.575257   67282 cri.go:89] found id: ""
	I1004 04:26:07.575283   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.575292   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:07.575299   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:07.575310   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:07.629477   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:07.629515   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:07.643294   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:07.643326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:07.720324   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:07.720350   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:07.720365   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:07.797641   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:07.797678   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:10.339392   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:10.353341   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:10.353397   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:10.391023   67282 cri.go:89] found id: ""
	I1004 04:26:10.391049   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.391059   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:10.391066   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:10.391129   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:10.424345   67282 cri.go:89] found id: ""
	I1004 04:26:10.424376   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.424388   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:10.424396   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:10.424466   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:10.459344   67282 cri.go:89] found id: ""
	I1004 04:26:10.459374   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.459387   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:10.459394   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:10.459451   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:10.494898   67282 cri.go:89] found id: ""
	I1004 04:26:10.494921   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.494929   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:10.494935   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:10.494982   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:10.531084   67282 cri.go:89] found id: ""
	I1004 04:26:10.531111   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.531122   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:10.531129   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:10.531185   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:10.566918   67282 cri.go:89] found id: ""
	I1004 04:26:10.566949   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.566960   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:10.566967   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:10.567024   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:10.604888   67282 cri.go:89] found id: ""
	I1004 04:26:10.604923   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.604935   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:10.604942   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:10.605013   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:10.641578   67282 cri.go:89] found id: ""
	I1004 04:26:10.641606   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.641620   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:10.641631   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:10.641648   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:10.696848   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:10.696882   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:10.710393   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:10.710417   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:10.780854   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:10.780881   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:10.780895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:10.861732   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:10.861771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:13.403231   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:13.417246   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:13.417319   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:13.451581   67282 cri.go:89] found id: ""
	I1004 04:26:13.451607   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.451616   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:13.451621   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:13.451681   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:13.488362   67282 cri.go:89] found id: ""
	I1004 04:26:13.488388   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.488396   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:13.488401   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:13.488449   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:13.522697   67282 cri.go:89] found id: ""
	I1004 04:26:13.522729   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.522740   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:13.522751   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:13.522803   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:13.564926   67282 cri.go:89] found id: ""
	I1004 04:26:13.564959   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.564972   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:13.564981   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:13.565058   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:13.600582   67282 cri.go:89] found id: ""
	I1004 04:26:13.600612   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.600622   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:13.600630   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:13.600688   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:13.634550   67282 cri.go:89] found id: ""
	I1004 04:26:13.634575   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.634584   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:13.634591   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:13.634646   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:13.669281   67282 cri.go:89] found id: ""
	I1004 04:26:13.669311   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.669320   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:13.669326   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:13.669388   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:13.707664   67282 cri.go:89] found id: ""
	I1004 04:26:13.707693   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.707703   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:13.707713   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:13.707727   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:13.721127   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:13.721168   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:13.788026   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:13.788051   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:13.788067   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:13.864505   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:13.864542   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:13.902896   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:13.902921   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:16.456813   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:16.470071   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:16.470138   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:16.506085   67282 cri.go:89] found id: ""
	I1004 04:26:16.506114   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.506125   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:16.506133   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:16.506189   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:16.540016   67282 cri.go:89] found id: ""
	I1004 04:26:16.540044   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.540052   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:16.540056   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:16.540100   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:16.579247   67282 cri.go:89] found id: ""
	I1004 04:26:16.579272   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.579280   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:16.579285   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:16.579332   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:16.615552   67282 cri.go:89] found id: ""
	I1004 04:26:16.615579   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.615601   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:16.615621   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:16.615675   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:16.652639   67282 cri.go:89] found id: ""
	I1004 04:26:16.652660   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.652671   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:16.652678   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:16.652732   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:16.689607   67282 cri.go:89] found id: ""
	I1004 04:26:16.689631   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.689643   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:16.689650   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:16.689720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:16.724430   67282 cri.go:89] found id: ""
	I1004 04:26:16.724458   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.724469   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:16.724475   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:16.724534   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:16.758378   67282 cri.go:89] found id: ""
	I1004 04:26:16.758412   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.758423   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:16.758434   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:16.758454   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:16.826234   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:16.826259   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:16.826273   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:16.906908   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:16.906945   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:16.950295   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:16.950321   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:17.002216   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:17.002253   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:19.516253   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:19.529664   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:19.529726   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:19.566669   67282 cri.go:89] found id: ""
	I1004 04:26:19.566700   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.566711   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:19.566718   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:19.566772   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:19.605923   67282 cri.go:89] found id: ""
	I1004 04:26:19.605951   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.605961   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:19.605968   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:19.606025   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:19.645132   67282 cri.go:89] found id: ""
	I1004 04:26:19.645158   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.645168   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:19.645175   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:19.645235   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:19.687135   67282 cri.go:89] found id: ""
	I1004 04:26:19.687160   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.687171   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:19.687178   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:19.687256   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:19.724180   67282 cri.go:89] found id: ""
	I1004 04:26:19.724213   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.724224   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:19.724230   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:19.724295   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:19.761608   67282 cri.go:89] found id: ""
	I1004 04:26:19.761638   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.761649   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:19.761656   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:19.761714   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:19.795060   67282 cri.go:89] found id: ""
	I1004 04:26:19.795089   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.795099   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:19.795106   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:19.795164   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:19.835678   67282 cri.go:89] found id: ""
	I1004 04:26:19.835703   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.835712   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:19.835722   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:19.835736   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:19.889508   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:19.889543   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:19.903206   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:19.903233   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:19.973445   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:19.973471   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:19.973485   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:20.053996   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:20.054034   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:22.594171   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:22.609084   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:22.609145   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:22.650423   67282 cri.go:89] found id: ""
	I1004 04:26:22.650449   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.650459   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:22.650466   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:22.650525   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:22.686420   67282 cri.go:89] found id: ""
	I1004 04:26:22.686450   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.686461   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:22.686469   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:22.686535   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:22.721385   67282 cri.go:89] found id: ""
	I1004 04:26:22.721408   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.721416   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:22.721421   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:22.721484   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:22.765461   67282 cri.go:89] found id: ""
	I1004 04:26:22.765492   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.765504   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:22.765511   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:22.765569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:22.798192   67282 cri.go:89] found id: ""
	I1004 04:26:22.798220   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.798230   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:22.798235   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:22.798293   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:22.833110   67282 cri.go:89] found id: ""
	I1004 04:26:22.833138   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.833147   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:22.833153   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:22.833212   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:22.875653   67282 cri.go:89] found id: ""
	I1004 04:26:22.875684   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.875696   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:22.875704   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:22.875766   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:22.913906   67282 cri.go:89] found id: ""
	I1004 04:26:22.913931   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.913938   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:22.913946   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:22.913957   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:22.969480   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:22.969511   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:22.983475   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:22.983500   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:23.059953   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:23.059982   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:23.059996   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:23.139106   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:23.139134   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:25.678489   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:25.692648   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:25.692705   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:25.728232   67282 cri.go:89] found id: ""
	I1004 04:26:25.728261   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.728269   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:25.728276   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:25.728335   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:25.763956   67282 cri.go:89] found id: ""
	I1004 04:26:25.763982   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.763991   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:25.763998   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:25.764057   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:25.799715   67282 cri.go:89] found id: ""
	I1004 04:26:25.799743   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.799753   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:25.799761   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:25.799840   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:25.834823   67282 cri.go:89] found id: ""
	I1004 04:26:25.834855   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.834866   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:25.834873   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:25.834933   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:25.869194   67282 cri.go:89] found id: ""
	I1004 04:26:25.869224   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.869235   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:25.869242   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:25.869303   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:25.903514   67282 cri.go:89] found id: ""
	I1004 04:26:25.903543   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.903553   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:25.903558   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:25.903606   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:25.939887   67282 cri.go:89] found id: ""
	I1004 04:26:25.939919   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.939930   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:25.939938   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:25.939996   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:25.981922   67282 cri.go:89] found id: ""
	I1004 04:26:25.981944   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.981952   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:25.981960   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:25.981971   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:26.064860   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:26.064891   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:26.105272   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:26.105296   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:26.162602   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:26.162640   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:26.176408   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:26.176439   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:26.242264   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
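(Editor's note, not part of the captured log.) Each iteration above is the start-up wait loop: the tool pgreps for a kube-apiserver process, lists every control-plane container via crictl (all queries return empty), gathers kubelet/dmesg/CRI-O/container-status logs, and retries `kubectl describe nodes`, which keeps failing with "connection refused" because nothing is listening on localhost:8443. A minimal sketch of the same check run by hand on the node is shown below; the crictl, journalctl, and kubectl invocations are copied from the log above, while the loop bounds and sleep interval are illustrative assumptions, not part of the test run.

	# Illustrative only: mirrors the polling the log shows, not the test's own code.
	for i in $(seq 1 5); do                                    # loop count is an assumption
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break    # same process check as above
	  sudo crictl ps -a --quiet --name=kube-apiserver          # empty output = no container yet
	  sleep 3                                                  # the log shows ~3s between polls
	done
	# If the loop never breaks, look at why the apiserver container is missing:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig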
	I1004 04:26:28.742417   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:28.755655   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:28.755723   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:28.789338   67282 cri.go:89] found id: ""
	I1004 04:26:28.789361   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.789369   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:28.789374   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:28.789420   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:28.823513   67282 cri.go:89] found id: ""
	I1004 04:26:28.823544   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.823555   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:28.823562   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:28.823619   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:28.858826   67282 cri.go:89] found id: ""
	I1004 04:26:28.858854   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.858866   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:28.858873   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:28.858927   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:28.892552   67282 cri.go:89] found id: ""
	I1004 04:26:28.892579   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.892587   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:28.892593   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:28.892639   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:28.929250   67282 cri.go:89] found id: ""
	I1004 04:26:28.929277   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.929284   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:28.929289   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:28.929335   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:28.966554   67282 cri.go:89] found id: ""
	I1004 04:26:28.966581   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.966589   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:28.966594   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:28.966642   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:28.999930   67282 cri.go:89] found id: ""
	I1004 04:26:28.999954   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.999964   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:28.999970   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:29.000025   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:29.033687   67282 cri.go:89] found id: ""
	I1004 04:26:29.033717   67282 logs.go:282] 0 containers: []
	W1004 04:26:29.033727   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:29.033737   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:29.033752   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:29.109486   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:29.109523   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:29.149125   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:29.149152   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:29.197830   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:29.197861   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:29.211182   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:29.211204   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:29.276808   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:31.777659   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:31.791374   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:31.791425   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:31.825453   67282 cri.go:89] found id: ""
	I1004 04:26:31.825480   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.825489   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:31.825495   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:31.825553   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:31.857845   67282 cri.go:89] found id: ""
	I1004 04:26:31.857875   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.857884   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:31.857893   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:31.857949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:31.892282   67282 cri.go:89] found id: ""
	I1004 04:26:31.892309   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.892317   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:31.892322   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:31.892366   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:31.926016   67282 cri.go:89] found id: ""
	I1004 04:26:31.926037   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.926045   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:31.926051   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:31.926094   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:31.961382   67282 cri.go:89] found id: ""
	I1004 04:26:31.961415   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.961425   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:31.961433   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:31.961492   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:31.994570   67282 cri.go:89] found id: ""
	I1004 04:26:31.994602   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.994613   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:31.994620   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:31.994675   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:32.027359   67282 cri.go:89] found id: ""
	I1004 04:26:32.027383   67282 logs.go:282] 0 containers: []
	W1004 04:26:32.027391   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:32.027397   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:32.027448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:32.063518   67282 cri.go:89] found id: ""
	I1004 04:26:32.063545   67282 logs.go:282] 0 containers: []
	W1004 04:26:32.063555   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:32.063565   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:32.063577   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:32.151555   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:32.151582   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:32.190678   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:32.190700   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:32.243567   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:32.243596   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:32.256293   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:32.256320   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:32.329513   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:34.830126   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:34.844760   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:34.844833   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:34.878409   67282 cri.go:89] found id: ""
	I1004 04:26:34.878433   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.878440   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:34.878445   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:34.878500   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:34.916493   67282 cri.go:89] found id: ""
	I1004 04:26:34.916516   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.916524   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:34.916532   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:34.916577   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:34.954532   67282 cri.go:89] found id: ""
	I1004 04:26:34.954556   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.954565   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:34.954570   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:34.954616   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:34.987163   67282 cri.go:89] found id: ""
	I1004 04:26:34.987190   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.987198   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:34.987205   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:34.987261   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:35.021351   67282 cri.go:89] found id: ""
	I1004 04:26:35.021379   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.021388   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:35.021394   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:35.021452   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:35.056350   67282 cri.go:89] found id: ""
	I1004 04:26:35.056376   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.056384   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:35.056390   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:35.056448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:35.093375   67282 cri.go:89] found id: ""
	I1004 04:26:35.093402   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.093412   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:35.093420   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:35.093486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:35.130509   67282 cri.go:89] found id: ""
	I1004 04:26:35.130532   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.130541   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:35.130549   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:35.130562   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:35.188138   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:35.188174   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:35.202226   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:35.202267   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:35.276652   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:35.276675   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:35.276688   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:35.357339   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:35.357373   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:37.898166   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:37.911319   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:37.911387   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:37.944551   67282 cri.go:89] found id: ""
	I1004 04:26:37.944578   67282 logs.go:282] 0 containers: []
	W1004 04:26:37.944590   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:37.944597   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:37.944652   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:37.978066   67282 cri.go:89] found id: ""
	I1004 04:26:37.978093   67282 logs.go:282] 0 containers: []
	W1004 04:26:37.978101   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:37.978107   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:37.978163   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:38.011065   67282 cri.go:89] found id: ""
	I1004 04:26:38.011095   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.011104   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:38.011109   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:38.011156   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:38.050323   67282 cri.go:89] found id: ""
	I1004 04:26:38.050349   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.050359   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:38.050366   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:38.050425   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:38.089141   67282 cri.go:89] found id: ""
	I1004 04:26:38.089169   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.089177   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:38.089182   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:38.089258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:38.122625   67282 cri.go:89] found id: ""
	I1004 04:26:38.122653   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.122663   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:38.122671   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:38.122719   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:38.159957   67282 cri.go:89] found id: ""
	I1004 04:26:38.159982   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.159990   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:38.159996   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:38.160085   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:38.194592   67282 cri.go:89] found id: ""
	I1004 04:26:38.194618   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.194626   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:38.194646   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:38.194657   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:38.263914   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:38.263945   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:38.263958   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:38.339864   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:38.339895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:38.375477   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:38.375505   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:38.428292   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:38.428320   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:40.941910   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:40.955041   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:40.955117   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:40.991278   67282 cri.go:89] found id: ""
	I1004 04:26:40.991307   67282 logs.go:282] 0 containers: []
	W1004 04:26:40.991317   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:40.991325   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:40.991389   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:41.025347   67282 cri.go:89] found id: ""
	I1004 04:26:41.025373   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.025385   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:41.025392   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:41.025450   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:41.060974   67282 cri.go:89] found id: ""
	I1004 04:26:41.061001   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.061019   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:41.061026   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:41.061087   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:41.097557   67282 cri.go:89] found id: ""
	I1004 04:26:41.097587   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.097598   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:41.097605   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:41.097665   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:41.136371   67282 cri.go:89] found id: ""
	I1004 04:26:41.136396   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.136405   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:41.136412   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:41.136472   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:41.172590   67282 cri.go:89] found id: ""
	I1004 04:26:41.172617   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.172627   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:41.172634   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:41.172687   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:41.209124   67282 cri.go:89] found id: ""
	I1004 04:26:41.209146   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.209154   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:41.209159   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:41.209214   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:41.250654   67282 cri.go:89] found id: ""
	I1004 04:26:41.250687   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.250699   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:41.250709   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:41.250723   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:41.305814   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:41.305864   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:41.322961   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:41.322989   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:41.427611   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:41.427632   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:41.427648   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:41.505830   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:41.505877   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:44.050902   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:44.065277   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:44.065343   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:44.101089   67282 cri.go:89] found id: ""
	I1004 04:26:44.101110   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.101117   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:44.101123   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:44.101174   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:44.138570   67282 cri.go:89] found id: ""
	I1004 04:26:44.138593   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.138601   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:44.138606   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:44.138650   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:44.178423   67282 cri.go:89] found id: ""
	I1004 04:26:44.178456   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.178478   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:44.178486   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:44.178556   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:44.213301   67282 cri.go:89] found id: ""
	I1004 04:26:44.213330   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.213338   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:44.213344   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:44.213401   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:44.247653   67282 cri.go:89] found id: ""
	I1004 04:26:44.247681   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.247688   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:44.247694   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:44.247756   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:44.281667   67282 cri.go:89] found id: ""
	I1004 04:26:44.281693   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.281704   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:44.281711   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:44.281767   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:44.314637   67282 cri.go:89] found id: ""
	I1004 04:26:44.314667   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.314677   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:44.314684   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:44.314760   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:44.349432   67282 cri.go:89] found id: ""
	I1004 04:26:44.349459   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.349469   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:44.349479   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:44.349492   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:44.397134   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:44.397168   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:44.410708   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:44.410738   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:44.482025   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:44.482049   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:44.482065   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:44.562652   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:44.562699   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:47.101459   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:47.116923   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:47.117020   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:47.153495   67282 cri.go:89] found id: ""
	I1004 04:26:47.153524   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.153534   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:47.153541   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:47.153601   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:47.189976   67282 cri.go:89] found id: ""
	I1004 04:26:47.190004   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.190014   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:47.190023   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:47.190084   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:47.225712   67282 cri.go:89] found id: ""
	I1004 04:26:47.225740   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.225748   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:47.225754   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:47.225800   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:47.261565   67282 cri.go:89] found id: ""
	I1004 04:26:47.261593   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.261603   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:47.261608   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:47.261665   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:47.298152   67282 cri.go:89] found id: ""
	I1004 04:26:47.298204   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.298214   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:47.298223   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:47.298279   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:47.338226   67282 cri.go:89] found id: ""
	I1004 04:26:47.338253   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.338261   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:47.338267   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:47.338320   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:47.378859   67282 cri.go:89] found id: ""
	I1004 04:26:47.378892   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.378902   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:47.378909   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:47.378964   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:47.418161   67282 cri.go:89] found id: ""
	I1004 04:26:47.418186   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.418194   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:47.418203   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:47.418213   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:47.470271   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:47.470311   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:47.484416   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:47.484453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:47.556744   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:47.556767   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:47.556778   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:47.634266   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:47.634299   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:50.175746   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:50.191850   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:50.191945   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:50.229542   67282 cri.go:89] found id: ""
	I1004 04:26:50.229574   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.229584   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:50.229593   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:50.229655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:50.268401   67282 cri.go:89] found id: ""
	I1004 04:26:50.268432   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.268441   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:50.268449   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:50.268522   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:50.302927   67282 cri.go:89] found id: ""
	I1004 04:26:50.302954   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.302964   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:50.302969   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:50.303029   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:50.336617   67282 cri.go:89] found id: ""
	I1004 04:26:50.336646   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.336656   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:50.336663   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:50.336724   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:50.372871   67282 cri.go:89] found id: ""
	I1004 04:26:50.372901   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.372911   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:50.372918   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:50.372977   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:50.409601   67282 cri.go:89] found id: ""
	I1004 04:26:50.409629   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.409640   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:50.409648   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:50.409723   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:50.451899   67282 cri.go:89] found id: ""
	I1004 04:26:50.451927   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.451935   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:50.451940   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:50.451991   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:50.487306   67282 cri.go:89] found id: ""
	I1004 04:26:50.487332   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.487343   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:50.487353   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:50.487369   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:50.565167   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:50.565192   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:50.565207   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:50.646155   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:50.646194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:50.688459   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:50.688489   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:50.742416   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:50.742460   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:53.257063   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:53.270546   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:53.270618   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:53.306504   67282 cri.go:89] found id: ""
	I1004 04:26:53.306530   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.306538   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:53.306544   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:53.306594   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:53.343256   67282 cri.go:89] found id: ""
	I1004 04:26:53.343285   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.343293   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:53.343299   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:53.343352   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:53.380834   67282 cri.go:89] found id: ""
	I1004 04:26:53.380864   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.380873   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:53.380880   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:53.380940   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:53.417361   67282 cri.go:89] found id: ""
	I1004 04:26:53.417391   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.417404   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:53.417415   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:53.417479   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:53.451948   67282 cri.go:89] found id: ""
	I1004 04:26:53.451970   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.451978   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:53.451983   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:53.452039   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:53.487731   67282 cri.go:89] found id: ""
	I1004 04:26:53.487756   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.487764   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:53.487769   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:53.487836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:53.531549   67282 cri.go:89] found id: ""
	I1004 04:26:53.531573   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.531582   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:53.531587   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:53.531643   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:53.578123   67282 cri.go:89] found id: ""
	I1004 04:26:53.578151   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.578162   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:53.578180   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:53.578195   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:53.643062   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:53.643093   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:53.696157   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:53.696194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:53.709884   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:53.709910   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:53.791272   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:53.791297   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:53.791314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:56.371608   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:56.386293   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:56.386376   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:56.425531   67282 cri.go:89] found id: ""
	I1004 04:26:56.425560   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.425571   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:56.425578   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:56.425646   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:56.470293   67282 cri.go:89] found id: ""
	I1004 04:26:56.470326   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.470335   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:56.470340   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:56.470400   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:56.508927   67282 cri.go:89] found id: ""
	I1004 04:26:56.508955   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.508963   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:56.508968   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:56.509018   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:56.549149   67282 cri.go:89] found id: ""
	I1004 04:26:56.549178   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.549191   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:56.549199   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:56.549270   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:56.589412   67282 cri.go:89] found id: ""
	I1004 04:26:56.589441   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.589451   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:56.589459   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:56.589517   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:56.624732   67282 cri.go:89] found id: ""
	I1004 04:26:56.624760   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.624770   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:56.624776   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:56.624838   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:56.662385   67282 cri.go:89] found id: ""
	I1004 04:26:56.662413   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.662421   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:56.662427   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:56.662483   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:56.697982   67282 cri.go:89] found id: ""
	I1004 04:26:56.698014   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.698025   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:56.698036   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:56.698049   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:56.750597   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:56.750633   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:56.764884   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:56.764921   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:56.844404   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:56.844433   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:56.844451   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:56.924373   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:56.924406   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:59.466449   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:59.481897   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:59.481972   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:59.535384   67282 cri.go:89] found id: ""
	I1004 04:26:59.535411   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.535422   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:59.535428   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:59.535486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:59.595843   67282 cri.go:89] found id: ""
	I1004 04:26:59.595875   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.595886   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:59.595894   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:59.595954   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:59.641010   67282 cri.go:89] found id: ""
	I1004 04:26:59.641041   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.641049   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:59.641057   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:59.641102   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:59.679705   67282 cri.go:89] found id: ""
	I1004 04:26:59.679736   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.679746   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:59.679753   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:59.679828   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:59.715960   67282 cri.go:89] found id: ""
	I1004 04:26:59.715985   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.715993   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:59.715998   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:59.716047   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:59.757406   67282 cri.go:89] found id: ""
	I1004 04:26:59.757442   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.757453   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:59.757461   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:59.757528   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:59.792038   67282 cri.go:89] found id: ""
	I1004 04:26:59.792066   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.792076   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:59.792083   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:59.792141   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:59.830258   67282 cri.go:89] found id: ""
	I1004 04:26:59.830281   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.830289   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:59.830296   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:59.830308   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:59.877273   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:59.877304   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:59.932570   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:59.932610   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:59.945896   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:59.945919   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:00.020363   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:00.020392   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:00.020412   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:02.601022   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:02.615039   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:02.615112   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:02.654541   67282 cri.go:89] found id: ""
	I1004 04:27:02.654567   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.654574   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:02.654579   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:02.654638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:02.691313   67282 cri.go:89] found id: ""
	I1004 04:27:02.691338   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.691349   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:02.691355   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:02.691414   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:02.735337   67282 cri.go:89] found id: ""
	I1004 04:27:02.735367   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.735376   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:02.735383   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:02.735486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:02.769604   67282 cri.go:89] found id: ""
	I1004 04:27:02.769628   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.769638   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:02.769643   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:02.769704   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:02.812913   67282 cri.go:89] found id: ""
	I1004 04:27:02.812938   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.812949   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:02.812954   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:02.813020   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:02.849910   67282 cri.go:89] found id: ""
	I1004 04:27:02.849939   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.849949   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:02.849956   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:02.850023   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:02.889467   67282 cri.go:89] found id: ""
	I1004 04:27:02.889497   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.889509   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:02.889517   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:02.889575   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:02.928508   67282 cri.go:89] found id: ""
	I1004 04:27:02.928529   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.928537   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:02.928545   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:02.928556   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:02.942783   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:02.942821   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:03.018282   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:03.018304   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:03.018314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:03.101588   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:03.101622   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:03.149911   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:03.149937   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:05.703125   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:05.717243   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:05.717303   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:05.752564   67282 cri.go:89] found id: ""
	I1004 04:27:05.752588   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.752597   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:05.752609   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:05.752656   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:05.786955   67282 cri.go:89] found id: ""
	I1004 04:27:05.786983   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.786994   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:05.787001   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:05.787073   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:05.823848   67282 cri.go:89] found id: ""
	I1004 04:27:05.823882   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.823893   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:05.823901   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:05.823970   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:05.866192   67282 cri.go:89] found id: ""
	I1004 04:27:05.866220   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.866238   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:05.866246   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:05.866305   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:05.904051   67282 cri.go:89] found id: ""
	I1004 04:27:05.904078   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.904089   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:05.904096   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:05.904154   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:05.940041   67282 cri.go:89] found id: ""
	I1004 04:27:05.940075   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.940085   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:05.940092   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:05.940158   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:05.975758   67282 cri.go:89] found id: ""
	I1004 04:27:05.975799   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.975810   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:05.975818   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:05.975892   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:06.011044   67282 cri.go:89] found id: ""
	I1004 04:27:06.011086   67282 logs.go:282] 0 containers: []
	W1004 04:27:06.011096   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:06.011105   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:06.011116   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:06.024900   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:06.024937   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:06.109932   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:06.109960   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:06.109976   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:06.189517   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:06.189557   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:06.230019   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:06.230048   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:08.785355   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:08.799156   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:08.799218   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:08.843606   67282 cri.go:89] found id: ""
	I1004 04:27:08.843634   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.843643   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:08.843648   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:08.843698   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:08.884418   67282 cri.go:89] found id: ""
	I1004 04:27:08.884443   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.884450   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:08.884456   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:08.884503   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:08.925878   67282 cri.go:89] found id: ""
	I1004 04:27:08.925906   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.925914   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:08.925920   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:08.925970   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:08.966127   67282 cri.go:89] found id: ""
	I1004 04:27:08.966157   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.966167   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:08.966173   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:08.966227   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:09.010646   67282 cri.go:89] found id: ""
	I1004 04:27:09.010672   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.010682   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:09.010702   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:09.010769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:09.049738   67282 cri.go:89] found id: ""
	I1004 04:27:09.049761   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.049768   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:09.049774   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:09.049825   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:09.082709   67282 cri.go:89] found id: ""
	I1004 04:27:09.082739   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.082747   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:09.082752   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:09.082808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:09.120574   67282 cri.go:89] found id: ""
	I1004 04:27:09.120605   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.120617   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:09.120626   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:09.120636   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:09.202880   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:09.202922   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:09.242668   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:09.242700   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:09.298662   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:09.298703   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:09.314832   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:09.314868   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:09.389062   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:11.889645   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:11.902953   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:11.903012   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:11.939846   67282 cri.go:89] found id: ""
	I1004 04:27:11.939874   67282 logs.go:282] 0 containers: []
	W1004 04:27:11.939882   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:11.939888   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:11.939936   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:11.975281   67282 cri.go:89] found id: ""
	I1004 04:27:11.975303   67282 logs.go:282] 0 containers: []
	W1004 04:27:11.975311   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:11.975317   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:11.975370   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:12.011400   67282 cri.go:89] found id: ""
	I1004 04:27:12.011428   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.011438   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:12.011443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:12.011506   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:12.046862   67282 cri.go:89] found id: ""
	I1004 04:27:12.046889   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.046898   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:12.046905   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:12.046960   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:12.081537   67282 cri.go:89] found id: ""
	I1004 04:27:12.081569   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.081581   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:12.081590   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:12.081655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:12.121982   67282 cri.go:89] found id: ""
	I1004 04:27:12.122010   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.122021   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:12.122028   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:12.122086   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:12.161419   67282 cri.go:89] found id: ""
	I1004 04:27:12.161460   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.161473   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:12.161481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:12.161549   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:12.202188   67282 cri.go:89] found id: ""
	I1004 04:27:12.202230   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.202242   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:12.202253   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:12.202267   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:12.253424   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:12.253462   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:12.268116   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:12.268141   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:12.337788   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:12.337814   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:12.337826   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:12.417359   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:12.417395   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:14.959596   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:14.973031   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:14.973090   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:15.011451   67282 cri.go:89] found id: ""
	I1004 04:27:15.011487   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.011497   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:15.011513   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:15.011572   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:15.055767   67282 cri.go:89] found id: ""
	I1004 04:27:15.055817   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.055829   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:15.055836   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:15.055915   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:15.096357   67282 cri.go:89] found id: ""
	I1004 04:27:15.096385   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.096394   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:15.096399   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:15.096456   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:15.131824   67282 cri.go:89] found id: ""
	I1004 04:27:15.131853   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.131863   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:15.131870   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:15.131932   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:15.169250   67282 cri.go:89] found id: ""
	I1004 04:27:15.169285   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.169299   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:15.169307   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:15.169373   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:15.206852   67282 cri.go:89] found id: ""
	I1004 04:27:15.206881   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.206889   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:15.206895   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:15.206949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:15.241392   67282 cri.go:89] found id: ""
	I1004 04:27:15.241421   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.241431   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:15.241439   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:15.241498   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:15.280697   67282 cri.go:89] found id: ""
	I1004 04:27:15.280723   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.280734   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:15.280744   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:15.280758   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:15.361681   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:15.361716   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:15.404640   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:15.404676   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:15.457287   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:15.457326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:15.471162   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:15.471188   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:15.544157   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:18.045094   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:18.060228   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:18.060310   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:18.096659   67282 cri.go:89] found id: ""
	I1004 04:27:18.096688   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.096697   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:18.096703   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:18.096757   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:18.135538   67282 cri.go:89] found id: ""
	I1004 04:27:18.135565   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.135573   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:18.135579   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:18.135629   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:18.171051   67282 cri.go:89] found id: ""
	I1004 04:27:18.171082   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.171098   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:18.171106   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:18.171168   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:18.205696   67282 cri.go:89] found id: ""
	I1004 04:27:18.205725   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.205735   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:18.205742   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:18.205803   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:18.240545   67282 cri.go:89] found id: ""
	I1004 04:27:18.240566   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.240576   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:18.240584   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:18.240638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:18.279185   67282 cri.go:89] found id: ""
	I1004 04:27:18.279221   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.279232   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:18.279239   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:18.279310   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:18.318395   67282 cri.go:89] found id: ""
	I1004 04:27:18.318417   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.318424   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:18.318430   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:18.318476   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:18.352367   67282 cri.go:89] found id: ""
	I1004 04:27:18.352390   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.352398   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:18.352407   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:18.352420   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:18.365604   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:18.365637   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:18.438407   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:18.438427   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:18.438438   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:18.513645   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:18.513679   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:18.557224   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:18.557250   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:21.111005   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:21.126573   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:21.126631   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:21.161161   67282 cri.go:89] found id: ""
	I1004 04:27:21.161190   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.161201   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:21.161207   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:21.161258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:21.199517   67282 cri.go:89] found id: ""
	I1004 04:27:21.199544   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.199555   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:21.199562   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:21.199625   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:21.236210   67282 cri.go:89] found id: ""
	I1004 04:27:21.236238   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.236246   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:21.236251   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:21.236311   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:21.272720   67282 cri.go:89] found id: ""
	I1004 04:27:21.272746   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.272753   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:21.272759   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:21.272808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:21.311439   67282 cri.go:89] found id: ""
	I1004 04:27:21.311474   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.311484   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:21.311491   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:21.311551   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:21.360400   67282 cri.go:89] found id: ""
	I1004 04:27:21.360427   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.360436   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:21.360443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:21.360511   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:21.394627   67282 cri.go:89] found id: ""
	I1004 04:27:21.394656   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.394667   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:21.394673   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:21.394721   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:21.429736   67282 cri.go:89] found id: ""
	I1004 04:27:21.429762   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.429770   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:21.429778   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:21.429789   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:21.482773   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:21.482808   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:21.497570   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:21.497595   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:21.582335   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:21.582355   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:21.582367   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:21.662196   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:21.662230   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:24.205743   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:24.222878   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:24.222951   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:24.263410   67282 cri.go:89] found id: ""
	I1004 04:27:24.263450   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.263462   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:24.263469   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:24.263532   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:24.306892   67282 cri.go:89] found id: ""
	I1004 04:27:24.306923   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.306934   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:24.306941   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:24.307008   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:24.345522   67282 cri.go:89] found id: ""
	I1004 04:27:24.345559   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.345571   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:24.345579   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:24.345638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:24.384893   67282 cri.go:89] found id: ""
	I1004 04:27:24.384918   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.384925   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:24.384931   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:24.384978   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:24.420998   67282 cri.go:89] found id: ""
	I1004 04:27:24.421025   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.421036   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:24.421043   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:24.421105   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:24.456277   67282 cri.go:89] found id: ""
	I1004 04:27:24.456305   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.456315   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:24.456322   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:24.456383   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:24.497852   67282 cri.go:89] found id: ""
	I1004 04:27:24.497881   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.497892   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:24.497900   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:24.497960   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:24.538702   67282 cri.go:89] found id: ""
	I1004 04:27:24.538736   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.538755   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:24.538766   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:24.538778   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:24.553747   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:24.553773   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:24.638059   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:24.638081   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:24.638093   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:24.718165   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:24.718212   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:24.759770   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:24.759811   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:27.311684   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:27.327493   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:27.327570   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:27.362804   67282 cri.go:89] found id: ""
	I1004 04:27:27.362827   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.362836   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:27.362841   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:27.362888   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:27.401576   67282 cri.go:89] found id: ""
	I1004 04:27:27.401604   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.401614   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:27.401621   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:27.401682   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:27.445152   67282 cri.go:89] found id: ""
	I1004 04:27:27.445177   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.445187   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:27.445193   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:27.445240   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:27.482710   67282 cri.go:89] found id: ""
	I1004 04:27:27.482734   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.482742   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:27.482749   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:27.482808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:27.519459   67282 cri.go:89] found id: ""
	I1004 04:27:27.519488   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.519498   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:27.519505   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:27.519569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:27.559381   67282 cri.go:89] found id: ""
	I1004 04:27:27.559407   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.559417   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:27.559423   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:27.559468   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:27.609040   67282 cri.go:89] found id: ""
	I1004 04:27:27.609068   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.609076   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:27.609081   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:27.609128   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:27.654537   67282 cri.go:89] found id: ""
	I1004 04:27:27.654569   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.654579   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:27.654590   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:27.654603   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:27.709062   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:27.709098   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:27.722931   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:27.722955   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:27.796863   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:27.796884   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:27.796895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:27.879840   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:27.879876   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:30.423644   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:30.439256   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:30.439311   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:30.479612   67282 cri.go:89] found id: ""
	I1004 04:27:30.479640   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.479648   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:30.479654   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:30.479750   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:30.522846   67282 cri.go:89] found id: ""
	I1004 04:27:30.522879   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.522890   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:30.522898   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:30.522946   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:30.558935   67282 cri.go:89] found id: ""
	I1004 04:27:30.558962   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.558971   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:30.558976   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:30.559032   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:30.603383   67282 cri.go:89] found id: ""
	I1004 04:27:30.603411   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.603421   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:30.603428   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:30.603492   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:30.644700   67282 cri.go:89] found id: ""
	I1004 04:27:30.644727   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.644737   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:30.644744   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:30.644799   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:30.680328   67282 cri.go:89] found id: ""
	I1004 04:27:30.680358   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.680367   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:30.680372   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:30.680419   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:30.717973   67282 cri.go:89] found id: ""
	I1004 04:27:30.717995   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.718005   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:30.718021   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:30.718082   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:30.755838   67282 cri.go:89] found id: ""
	I1004 04:27:30.755866   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.755874   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:30.755882   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:30.755893   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:30.809999   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:30.810036   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:30.824447   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:30.824491   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:30.902008   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:30.902030   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:30.902043   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:30.986938   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:30.986984   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:33.531108   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:33.546681   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:33.546759   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:33.586444   67282 cri.go:89] found id: ""
	I1004 04:27:33.586469   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.586479   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:33.586486   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:33.586552   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:33.629340   67282 cri.go:89] found id: ""
	I1004 04:27:33.629365   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.629373   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:33.629378   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:33.629429   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:33.668446   67282 cri.go:89] found id: ""
	I1004 04:27:33.668473   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.668483   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:33.668490   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:33.668548   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:33.706287   67282 cri.go:89] found id: ""
	I1004 04:27:33.706312   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.706320   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:33.706327   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:33.706385   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:33.746161   67282 cri.go:89] found id: ""
	I1004 04:27:33.746189   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.746200   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:33.746207   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:33.746270   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:33.782157   67282 cri.go:89] found id: ""
	I1004 04:27:33.782184   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.782194   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:33.782200   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:33.782262   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:33.820332   67282 cri.go:89] found id: ""
	I1004 04:27:33.820361   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.820371   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:33.820378   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:33.820437   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:33.859431   67282 cri.go:89] found id: ""
	I1004 04:27:33.859458   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.859467   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:33.859475   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:33.859485   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:33.910259   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:33.910292   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:33.925149   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:33.925177   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:34.006153   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:34.006187   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:34.006202   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:34.115882   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:34.115916   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:36.662964   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:36.677071   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:36.677139   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:36.720785   67282 cri.go:89] found id: ""
	I1004 04:27:36.720807   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.720818   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:36.720826   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:36.720875   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:36.757535   67282 cri.go:89] found id: ""
	I1004 04:27:36.757563   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.757574   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:36.757582   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:36.757630   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:36.800989   67282 cri.go:89] found id: ""
	I1004 04:27:36.801024   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.801038   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:36.801046   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:36.801112   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:36.837101   67282 cri.go:89] found id: ""
	I1004 04:27:36.837122   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.837131   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:36.837136   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:36.837181   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:36.876325   67282 cri.go:89] found id: ""
	I1004 04:27:36.876358   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.876370   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:36.876379   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:36.876444   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:36.914720   67282 cri.go:89] found id: ""
	I1004 04:27:36.914749   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.914759   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:36.914767   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:36.914828   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:36.949672   67282 cri.go:89] found id: ""
	I1004 04:27:36.949694   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.949701   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:36.949706   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:36.949754   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:36.983374   67282 cri.go:89] found id: ""
	I1004 04:27:36.983406   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.983416   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:36.983427   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:36.983440   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:37.039040   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:37.039075   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:37.054873   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:37.054898   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:37.131537   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:37.131562   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:37.131577   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:37.213958   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:37.213990   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:39.754264   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:39.771465   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:39.771545   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:39.829530   67282 cri.go:89] found id: ""
	I1004 04:27:39.829560   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.829572   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:39.829580   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:39.829639   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:39.876055   67282 cri.go:89] found id: ""
	I1004 04:27:39.876078   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.876090   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:39.876095   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:39.876142   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:39.913304   67282 cri.go:89] found id: ""
	I1004 04:27:39.913327   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.913335   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:39.913340   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:39.913389   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:39.948821   67282 cri.go:89] found id: ""
	I1004 04:27:39.948847   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.948855   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:39.948862   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:39.948916   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:39.986994   67282 cri.go:89] found id: ""
	I1004 04:27:39.987023   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.987034   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:39.987041   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:39.987141   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:40.026627   67282 cri.go:89] found id: ""
	I1004 04:27:40.026656   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.026668   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:40.026675   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:40.026734   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:40.067028   67282 cri.go:89] found id: ""
	I1004 04:27:40.067068   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.067079   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:40.067086   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:40.067144   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:40.105638   67282 cri.go:89] found id: ""
	I1004 04:27:40.105667   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.105677   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:40.105694   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:40.105707   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:40.159425   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:40.159467   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:40.175045   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:40.175073   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:40.261967   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:40.261989   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:40.262002   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:40.345317   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:40.345354   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:42.888115   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:42.901889   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:42.901948   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:42.938556   67282 cri.go:89] found id: ""
	I1004 04:27:42.938587   67282 logs.go:282] 0 containers: []
	W1004 04:27:42.938597   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:42.938604   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:42.938668   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:42.974569   67282 cri.go:89] found id: ""
	I1004 04:27:42.974595   67282 logs.go:282] 0 containers: []
	W1004 04:27:42.974606   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:42.974613   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:42.974679   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:43.010552   67282 cri.go:89] found id: ""
	I1004 04:27:43.010581   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.010593   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:43.010600   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:43.010655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:43.046204   67282 cri.go:89] found id: ""
	I1004 04:27:43.046237   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.046247   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:43.046254   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:43.046313   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:43.081612   67282 cri.go:89] found id: ""
	I1004 04:27:43.081644   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.081655   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:43.081662   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:43.081729   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:43.121103   67282 cri.go:89] found id: ""
	I1004 04:27:43.121126   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.121133   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:43.121139   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:43.121191   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:43.157104   67282 cri.go:89] found id: ""
	I1004 04:27:43.157128   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.157136   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:43.157141   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:43.157196   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:43.198927   67282 cri.go:89] found id: ""
	I1004 04:27:43.198951   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.198958   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:43.198966   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:43.198975   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:43.254534   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:43.254563   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:43.268106   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:43.268130   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:43.344382   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:43.344410   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:43.344425   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:43.426916   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:43.426948   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:45.966806   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:45.980187   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:45.980252   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:46.014196   67282 cri.go:89] found id: ""
	I1004 04:27:46.014220   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.014228   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:46.014233   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:46.014295   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:46.053910   67282 cri.go:89] found id: ""
	I1004 04:27:46.053940   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.053951   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:46.053957   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:46.054013   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:46.087896   67282 cri.go:89] found id: ""
	I1004 04:27:46.087921   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.087930   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:46.087936   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:46.087985   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:46.123441   67282 cri.go:89] found id: ""
	I1004 04:27:46.123465   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.123475   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:46.123481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:46.123545   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:46.159664   67282 cri.go:89] found id: ""
	I1004 04:27:46.159688   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.159698   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:46.159704   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:46.159761   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:46.195474   67282 cri.go:89] found id: ""
	I1004 04:27:46.195501   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.195512   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:46.195525   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:46.195569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:46.228670   67282 cri.go:89] found id: ""
	I1004 04:27:46.228693   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.228701   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:46.228706   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:46.228759   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:46.265278   67282 cri.go:89] found id: ""
	I1004 04:27:46.265303   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.265311   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:46.265325   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:46.265338   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:46.315135   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:46.315163   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:46.327765   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:46.327797   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:46.393157   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:46.393173   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:46.393184   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:46.473026   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:46.473058   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:49.011972   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:49.025718   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:49.025783   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:49.062749   67282 cri.go:89] found id: ""
	I1004 04:27:49.062774   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.062782   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:49.062788   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:49.062844   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:49.100838   67282 cri.go:89] found id: ""
	I1004 04:27:49.100886   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.100897   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:49.100904   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:49.100961   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:49.139966   67282 cri.go:89] found id: ""
	I1004 04:27:49.139990   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.140000   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:49.140007   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:49.140088   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:49.179347   67282 cri.go:89] found id: ""
	I1004 04:27:49.179373   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.179384   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:49.179391   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:49.179435   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:49.218086   67282 cri.go:89] found id: ""
	I1004 04:27:49.218112   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.218121   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:49.218127   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:49.218181   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:49.254779   67282 cri.go:89] found id: ""
	I1004 04:27:49.254811   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.254823   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:49.254830   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:49.254888   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:49.287351   67282 cri.go:89] found id: ""
	I1004 04:27:49.287381   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.287392   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:49.287398   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:49.287456   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:49.320051   67282 cri.go:89] found id: ""
	I1004 04:27:49.320078   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.320089   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:49.320100   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:49.320112   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:49.371270   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:49.371300   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:49.384403   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:49.384432   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:49.468132   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:49.468154   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:49.468167   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:49.543179   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:49.543211   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:52.093235   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:52.108446   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:52.108520   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:52.147590   67282 cri.go:89] found id: ""
	I1004 04:27:52.147613   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.147620   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:52.147626   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:52.147677   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:52.183066   67282 cri.go:89] found id: ""
	I1004 04:27:52.183095   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.183105   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:52.183112   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:52.183170   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:52.223109   67282 cri.go:89] found id: ""
	I1004 04:27:52.223140   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.223154   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:52.223165   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:52.223223   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:52.259547   67282 cri.go:89] found id: ""
	I1004 04:27:52.259573   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.259582   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:52.259587   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:52.259638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:52.296934   67282 cri.go:89] found id: ""
	I1004 04:27:52.296961   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.296971   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:52.296979   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:52.297040   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:52.331650   67282 cri.go:89] found id: ""
	I1004 04:27:52.331671   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.331679   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:52.331684   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:52.331728   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:52.365111   67282 cri.go:89] found id: ""
	I1004 04:27:52.365139   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.365150   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:52.365157   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:52.365239   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:52.400974   67282 cri.go:89] found id: ""
	I1004 04:27:52.401010   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.401023   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:52.401035   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:52.401049   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:52.484732   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:52.484771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:52.523322   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:52.523348   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:52.576671   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:52.576702   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:52.590263   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:52.590291   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:52.666646   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:55.166856   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:55.181481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:55.181562   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:55.218023   67282 cri.go:89] found id: ""
	I1004 04:27:55.218048   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.218056   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:55.218063   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:55.218121   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:55.256439   67282 cri.go:89] found id: ""
	I1004 04:27:55.256464   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.256472   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:55.256477   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:55.256531   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:55.294563   67282 cri.go:89] found id: ""
	I1004 04:27:55.294588   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.294596   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:55.294601   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:55.294656   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:55.331266   67282 cri.go:89] found id: ""
	I1004 04:27:55.331290   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.331300   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:55.331306   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:55.331370   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:55.367286   67282 cri.go:89] found id: ""
	I1004 04:27:55.367314   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.367325   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:55.367332   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:55.367391   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:55.402031   67282 cri.go:89] found id: ""
	I1004 04:27:55.402054   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.402062   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:55.402068   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:55.402122   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:55.437737   67282 cri.go:89] found id: ""
	I1004 04:27:55.437764   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.437774   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:55.437780   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:55.437842   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:55.470654   67282 cri.go:89] found id: ""
	I1004 04:27:55.470692   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.470704   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:55.470713   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:55.470726   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:55.521364   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:55.521393   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:55.534691   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:55.534716   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:55.600902   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:55.600923   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:55.600933   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:55.678896   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:55.678940   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
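Each retry cycle in the log above gathers the same diagnostics over SSH before probing for kube-apiserver again. A minimal manual equivalent of that collection pass, assuming shell access to the minikube node and the kubectl path printed in the Run: lines, would be roughly:

    sudo journalctl -u kubelet -n 400                                          # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings and errors
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400                                             # CRI-O logs
    sudo crictl ps -a                                                          # container status

Every command here is taken from the Run: lines in the log; only the grouping into a single pass (and the use of crictl directly instead of the `which crictl || echo crictl` fallback) is illustrative.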
	I1004 04:27:58.220086   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:58.234049   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:58.234110   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:58.281112   67282 cri.go:89] found id: ""
	I1004 04:27:58.281135   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.281143   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:58.281148   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:58.281191   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:58.320549   67282 cri.go:89] found id: ""
	I1004 04:27:58.320575   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.320584   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:58.320589   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:58.320635   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:58.355139   67282 cri.go:89] found id: ""
	I1004 04:27:58.355166   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.355174   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:58.355179   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:58.355225   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:58.387809   67282 cri.go:89] found id: ""
	I1004 04:27:58.387836   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.387846   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:58.387851   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:58.387908   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:58.420264   67282 cri.go:89] found id: ""
	I1004 04:27:58.420287   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.420295   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:58.420300   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:58.420349   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:58.455409   67282 cri.go:89] found id: ""
	I1004 04:27:58.455431   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.455438   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:58.455443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:58.455487   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:58.488708   67282 cri.go:89] found id: ""
	I1004 04:27:58.488734   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.488742   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:58.488749   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:58.488797   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:58.522139   67282 cri.go:89] found id: ""
	I1004 04:27:58.522161   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.522169   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:58.522176   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:58.522187   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:58.604653   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:58.604683   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:58.645141   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:58.645169   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:58.699716   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:58.699748   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:58.713197   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:58.713228   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:58.781998   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:01.282429   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:01.297266   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:01.297343   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:01.330421   67282 cri.go:89] found id: ""
	I1004 04:28:01.330446   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.330454   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:28:01.330459   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:01.330514   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:01.366960   67282 cri.go:89] found id: ""
	I1004 04:28:01.366983   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.366992   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:28:01.366998   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:01.367067   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:01.400886   67282 cri.go:89] found id: ""
	I1004 04:28:01.400910   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.400920   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:28:01.400931   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:01.400987   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:01.435556   67282 cri.go:89] found id: ""
	I1004 04:28:01.435586   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.435594   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:28:01.435601   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:01.435649   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:01.475772   67282 cri.go:89] found id: ""
	I1004 04:28:01.475810   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.475820   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:28:01.475826   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:01.475884   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:01.512380   67282 cri.go:89] found id: ""
	I1004 04:28:01.512403   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.512411   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:28:01.512417   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:01.512465   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:01.550488   67282 cri.go:89] found id: ""
	I1004 04:28:01.550517   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.550528   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:01.550536   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:28:01.550595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:28:01.586216   67282 cri.go:89] found id: ""
	I1004 04:28:01.586249   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.586261   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:28:01.586271   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:01.586285   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:01.640819   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:01.640860   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:01.656990   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:01.657020   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:28:01.731326   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:01.731354   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:01.731368   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:01.810007   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:28:01.810044   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:04.352648   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:04.366150   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:04.366227   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:04.403272   67282 cri.go:89] found id: ""
	I1004 04:28:04.403298   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.403308   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:28:04.403315   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:04.403371   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:04.439237   67282 cri.go:89] found id: ""
	I1004 04:28:04.439269   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.439280   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:28:04.439287   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:04.439345   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:04.475532   67282 cri.go:89] found id: ""
	I1004 04:28:04.475558   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.475569   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:28:04.475576   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:04.475638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:04.511738   67282 cri.go:89] found id: ""
	I1004 04:28:04.511765   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.511775   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:28:04.511792   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:04.511850   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:04.553536   67282 cri.go:89] found id: ""
	I1004 04:28:04.553561   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.553568   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:28:04.553574   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:04.553625   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:04.589016   67282 cri.go:89] found id: ""
	I1004 04:28:04.589044   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.589053   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:28:04.589058   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:04.589106   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:04.622780   67282 cri.go:89] found id: ""
	I1004 04:28:04.622808   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.622817   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:04.622823   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:28:04.622879   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:28:04.662620   67282 cri.go:89] found id: ""
	I1004 04:28:04.662641   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.662649   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:28:04.662659   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:04.662669   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:04.717894   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:04.717928   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:04.732353   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:04.732385   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:28:04.806443   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:04.806469   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:04.806492   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:04.887684   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:28:04.887717   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:07.426630   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:07.440242   67282 kubeadm.go:597] duration metric: took 4m3.475062199s to restartPrimaryControlPlane
	W1004 04:28:07.440318   67282 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1004 04:28:07.440346   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:28:08.147532   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:08.162175   67282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:28:08.172013   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:28:08.181741   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:28:08.181757   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:28:08.181801   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:28:08.191002   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:28:08.191046   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:28:08.200929   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:28:08.210241   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:28:08.210286   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:28:08.219693   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:28:08.229497   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:28:08.229534   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:28:08.239583   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:28:08.249207   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:28:08.249252   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
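The config check above follows one pattern per kubeconfig file: grep it for the expected control-plane endpoint and remove it if the endpoint is absent (here none of the files exist, so every grep exits with status 2 and each path is cleaned up with rm -f). A compact sketch of that cleanup loop, built only from the commands shown in the log and not from minikube's actual implementation in kubeadm.go, is:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f    # drop configs that do not point at the expected endpoint
    done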
	I1004 04:28:08.258516   67282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:28:08.328054   67282 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:28:08.328132   67282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:28:08.472265   67282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:28:08.472420   67282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:28:08.472543   67282 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 04:28:08.655873   67282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:28:08.657726   67282 out.go:235]   - Generating certificates and keys ...
	I1004 04:28:08.657817   67282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:28:08.657876   67282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:28:08.657942   67282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:28:08.658034   67282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:28:08.658149   67282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:28:08.658235   67282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:28:08.658309   67282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:28:08.658396   67282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:28:08.658503   67282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:28:08.658600   67282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:28:08.658651   67282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:28:08.658707   67282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:28:08.706486   67282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:28:08.909036   67282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:28:09.285968   67282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:28:09.499963   67282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:28:09.516914   67282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:28:09.517832   67282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:28:09.517900   67282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:28:09.664925   67282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:28:09.666691   67282 out.go:235]   - Booting up control plane ...
	I1004 04:28:09.666889   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:28:09.671298   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:28:09.672046   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:28:09.672956   67282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:28:09.685069   67282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 04:28:49.686881   67282 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:28:49.687234   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:28:49.687487   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:28:54.687773   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:28:54.688026   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:29:04.688599   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:29:04.688808   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:29:24.690241   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:29:24.690419   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:04.692816   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:04.693091   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:04.693114   67282 kubeadm.go:310] 
	I1004 04:30:04.693149   67282 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:30:04.693214   67282 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:30:04.693236   67282 kubeadm.go:310] 
	I1004 04:30:04.693295   67282 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:30:04.693327   67282 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:30:04.693451   67282 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:30:04.693460   67282 kubeadm.go:310] 
	I1004 04:30:04.693568   67282 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:30:04.693614   67282 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:30:04.693668   67282 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:30:04.693688   67282 kubeadm.go:310] 
	I1004 04:30:04.693843   67282 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:30:04.693966   67282 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:30:04.693982   67282 kubeadm.go:310] 
	I1004 04:30:04.694097   67282 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:30:04.694218   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:30:04.694305   67282 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:30:04.694387   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:30:04.694399   67282 kubeadm.go:310] 
	I1004 04:30:04.695379   67282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:30:04.695478   67282 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:30:04.695566   67282 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1004 04:30:04.695695   67282 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
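The failure above is the generic kubeadm wait-control-plane timeout: the kubelet never answered on its health port (every probe of localhost:10248 was refused), so no control-plane static pods came up. The error text itself lists the next checks to run; collected in one place, with paths exactly as kubeadm prints them, they are:

    systemctl status kubelet                                               # is the service running at all?
    journalctl -xeu kubelet                                                # why it stopped or failed to start
    curl -sSL http://localhost:10248/healthz                               # the probe kubeadm retries above
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID     # CONTAINERID from the line above

Only the grouping is added here; each command appears verbatim in the kubeadm output.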
	
	I1004 04:30:04.695742   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:30:05.153635   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:30:05.170057   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:30:05.179541   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:30:05.179563   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:30:05.179611   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:30:05.188969   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:30:05.189025   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:30:05.198049   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:30:05.207031   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:30:05.207118   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:30:05.216934   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:30:05.226477   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:30:05.226541   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:30:05.236222   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:30:05.245314   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:30:05.245374   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:30:05.255762   67282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:30:05.329816   67282 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:30:05.329953   67282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:30:05.482342   67282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:30:05.482549   67282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:30:05.482692   67282 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 04:30:05.666400   67282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:30:05.668115   67282 out.go:235]   - Generating certificates and keys ...
	I1004 04:30:05.668217   67282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:30:05.668319   67282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:30:05.668460   67282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:30:05.668562   67282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:30:05.668660   67282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:30:05.668734   67282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:30:05.668823   67282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:30:05.668905   67282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:30:05.669010   67282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:30:05.669130   67282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:30:05.669186   67282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:30:05.669269   67282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:30:05.773446   67282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:30:05.823736   67282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:30:05.951294   67282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:30:06.250340   67282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:30:06.275797   67282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:30:06.276877   67282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:30:06.276944   67282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:30:06.437286   67282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:30:06.438849   67282 out.go:235]   - Booting up control plane ...
	I1004 04:30:06.438952   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:30:06.443688   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:30:06.444596   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:30:06.445267   67282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:30:06.457334   67282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 04:30:46.456706   67282 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:30:46.456854   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:46.457117   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:51.456986   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:51.457240   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:31:01.457062   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:31:01.457288   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:31:21.456976   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:31:21.457277   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:32:01.456978   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:32:01.457225   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:32:01.457249   67282 kubeadm.go:310] 
	I1004 04:32:01.457312   67282 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:32:01.457374   67282 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:32:01.457383   67282 kubeadm.go:310] 
	I1004 04:32:01.457434   67282 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:32:01.457512   67282 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:32:01.457678   67282 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:32:01.457692   67282 kubeadm.go:310] 
	I1004 04:32:01.457838   67282 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:32:01.457892   67282 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:32:01.457946   67282 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:32:01.457957   67282 kubeadm.go:310] 
	I1004 04:32:01.458102   67282 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:32:01.458217   67282 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:32:01.458233   67282 kubeadm.go:310] 
	I1004 04:32:01.458379   67282 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:32:01.458494   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:32:01.458604   67282 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:32:01.458699   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:32:01.458710   67282 kubeadm.go:310] 
	I1004 04:32:01.459157   67282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:32:01.459272   67282 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:32:01.459386   67282 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1004 04:32:01.459464   67282 kubeadm.go:394] duration metric: took 7m57.553695137s to StartCluster
	I1004 04:32:01.459522   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:32:01.459586   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:32:01.500997   67282 cri.go:89] found id: ""
	I1004 04:32:01.501026   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.501037   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:32:01.501044   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:32:01.501102   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:32:01.537240   67282 cri.go:89] found id: ""
	I1004 04:32:01.537276   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.537288   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:32:01.537295   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:32:01.537349   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:32:01.573959   67282 cri.go:89] found id: ""
	I1004 04:32:01.573995   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.574007   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:32:01.574013   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:32:01.574074   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:32:01.610614   67282 cri.go:89] found id: ""
	I1004 04:32:01.610645   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.610657   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:32:01.610665   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:32:01.610716   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:32:01.645520   67282 cri.go:89] found id: ""
	I1004 04:32:01.645554   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.645567   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:32:01.645574   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:32:01.645640   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:32:01.679787   67282 cri.go:89] found id: ""
	I1004 04:32:01.679814   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.679823   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:32:01.679828   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:32:01.679873   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:32:01.714860   67282 cri.go:89] found id: ""
	I1004 04:32:01.714883   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.714891   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:32:01.714897   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:32:01.714952   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:32:01.761170   67282 cri.go:89] found id: ""
	I1004 04:32:01.761198   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.761208   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:32:01.761220   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:32:01.761232   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:32:01.822966   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:32:01.823006   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:32:01.839482   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:32:01.839510   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:32:01.917863   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:32:01.917887   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:32:01.917901   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:32:02.027216   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:32:02.027247   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1004 04:32:02.069804   67282 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1004 04:32:02.069852   67282 out.go:270] * 
	* 
	W1004 04:32:02.069922   67282 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:32:02.069939   67282 out.go:270] * 
	* 
	W1004 04:32:02.070740   67282 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 04:32:02.074308   67282 out.go:201] 
	W1004 04:32:02.075387   67282 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:32:02.075427   67282 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1004 04:32:02.075458   67282 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1004 04:32:02.076675   67282 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-420062 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
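The failure above is kubeadm's wait-control-plane phase giving up: the kubelet never answered the health probe on http://localhost:10248/healthz, so minikube surfaced the condition as K8S_KUBELET_NOT_RUNNING and the start command exited with status 109. For reference, a minimal Go sketch of that kind of health poll is shown below; the function name waitForKubeletHealthz and the exact retry/timeout values are illustrative assumptions, not kubeadm's or minikube's actual code.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubeletHealthz polls a healthz URL until it returns 200 OK or the
// overall timeout expires. Illustrative sketch of the check kubeadm performs
// while waiting for the control plane; not the real kubeadm implementation.
func waitForKubeletHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{Timeout: 5 * time.Second}
	deadline := time.Now().Add(timeout)
	var lastErr error
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			lastErr = err // e.g. "connection refused" while the kubelet is down
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			lastErr = fmt.Errorf("healthz returned %s", resp.Status)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %s: %v", url, lastErr)
}

func main() {
	// Same endpoint the kubeadm output above shows being probed.
	if err := waitForKubeletHealthz("http://localhost:10248/healthz", 2*time.Second, 40*time.Second); err != nil {
		fmt.Println("kubelet not healthy:", err)
	}
}

If the probe keeps failing with connection refused, the next step is the one the captured log itself suggests: inspect 'journalctl -xeu kubelet' on the node and check the kubelet cgroup driver against the CRI-O configuration.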
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420062 -n old-k8s-version-420062
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420062 -n old-k8s-version-420062: exit status 2 (231.443962ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-420062 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-420062 logs -n 25: (1.53910789s)
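The post-mortem above simply shells out to the freshly built minikube binary and captures whatever it prints so the output can be embedded in this report. A minimal Go sketch of that pattern follows; runAndCapture is a hypothetical stand-in for illustration, not the actual helper in helpers_test.go.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runAndCapture runs a command with a deadline and returns its combined
// stdout/stderr, roughly how a post-mortem step collects 'minikube logs'.
// Hypothetical helper for illustration only.
func runAndCapture(ctx context.Context, name string, args ...string) (string, error) {
	cmd := exec.CommandContext(ctx, name, args...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	// Mirrors the call recorded above; the binary path and profile name are taken from the report.
	out, err := runAndCapture(ctx, "out/minikube-linux-amd64", "-p", "old-k8s-version-420062", "logs", "-n", "25")
	fmt.Println(out)
	if err != nil {
		fmt.Println("logs command failed:", err)
	}
}

The two-minute deadline here is an arbitrary illustrative choice; the real harness applies its own per-test timeouts.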
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-658545                                   | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-934812            | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-934812                                  | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-617497             | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-617497                  | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617497 --memory=2200 --alsologtostderr   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:17 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-617497 image list                           | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	| delete  | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:18 UTC |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-658545                  | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-658545                                   | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC | 04 Oct 24 04:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-281471  | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC | 04 Oct 24 04:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-420062        | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-934812                 | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-934812                                  | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:19 UTC | 04 Oct 24 04:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC | 04 Oct 24 04:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-420062             | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC | 04 Oct 24 04:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-281471       | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:21 UTC | 04 Oct 24 04:28 UTC |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 04:21:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 04:21:23.276574   67541 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:21:23.276701   67541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:21:23.276710   67541 out.go:358] Setting ErrFile to fd 2...
	I1004 04:21:23.276715   67541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:21:23.276893   67541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:21:23.277439   67541 out.go:352] Setting JSON to false
	I1004 04:21:23.278387   67541 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7428,"bootTime":1728008255,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 04:21:23.278482   67541 start.go:139] virtualization: kvm guest
	I1004 04:21:23.280571   67541 out.go:177] * [default-k8s-diff-port-281471] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 04:21:23.282033   67541 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 04:21:23.282063   67541 notify.go:220] Checking for updates...
	I1004 04:21:23.284454   67541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 04:21:23.285843   67541 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:21:23.287026   67541 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:21:23.288328   67541 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 04:21:23.289544   67541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 04:21:23.291321   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:21:23.291979   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:21:23.292059   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:21:23.306995   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I1004 04:21:23.307440   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:21:23.308080   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:21:23.308106   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:21:23.308442   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:21:23.308642   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:21:23.308893   67541 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 04:21:23.309208   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:21:23.309280   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:21:23.323807   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I1004 04:21:23.324281   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:21:23.324777   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:21:23.324797   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:21:23.325085   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:21:23.325248   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:21:23.359916   67541 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 04:21:23.361482   67541 start.go:297] selected driver: kvm2
	I1004 04:21:23.361504   67541 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:21:23.361657   67541 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 04:21:23.362533   67541 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:21:23.362621   67541 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 04:21:23.378088   67541 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 04:21:23.378515   67541 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:21:23.378547   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:21:23.378591   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:21:23.378627   67541 start.go:340] cluster config:
	{Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:21:23.378727   67541 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:21:23.380705   67541 out.go:177] * Starting "default-k8s-diff-port-281471" primary control-plane node in "default-k8s-diff-port-281471" cluster
	I1004 04:21:20.068102   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:23.140106   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:23.381986   67541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:21:23.382036   67541 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 04:21:23.382048   67541 cache.go:56] Caching tarball of preloaded images
	I1004 04:21:23.382125   67541 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 04:21:23.382135   67541 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 04:21:23.382254   67541 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/config.json ...
	I1004 04:21:23.382433   67541 start.go:360] acquireMachinesLock for default-k8s-diff-port-281471: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:21:29.220163   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:32.292105   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:38.372080   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:41.444091   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:47.524103   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:50.596091   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:56.676086   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:59.748055   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:05.828125   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:08.900042   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:14.980094   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:18.052114   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:24.132087   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:27.204139   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:33.284040   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:36.356076   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:42.436190   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:45.508075   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:51.588061   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:54.660042   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:00.740141   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:03.812099   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:09.892076   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:12.964133   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:15.968919   66755 start.go:364] duration metric: took 4m6.72532498s to acquireMachinesLock for "embed-certs-934812"
	I1004 04:23:15.968984   66755 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:15.968992   66755 fix.go:54] fixHost starting: 
	I1004 04:23:15.969309   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:15.969356   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:15.984739   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34447
	I1004 04:23:15.985214   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:15.985743   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:23:15.985769   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:15.986104   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:15.986289   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:15.986449   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:23:15.988237   66755 fix.go:112] recreateIfNeeded on embed-certs-934812: state=Stopped err=<nil>
	I1004 04:23:15.988263   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	W1004 04:23:15.988415   66755 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:15.990473   66755 out.go:177] * Restarting existing kvm2 VM for "embed-certs-934812" ...
	I1004 04:23:15.965929   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:15.965974   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:23:15.966321   66293 buildroot.go:166] provisioning hostname "no-preload-658545"
	I1004 04:23:15.966348   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:23:15.966530   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:23:15.968760   66293 machine.go:96] duration metric: took 4m37.423316886s to provisionDockerMachine
	I1004 04:23:15.968806   66293 fix.go:56] duration metric: took 4m37.446149084s for fixHost
	I1004 04:23:15.968814   66293 start.go:83] releasing machines lock for "no-preload-658545", held for 4m37.446179902s
	W1004 04:23:15.968836   66293 start.go:714] error starting host: provision: host is not running
	W1004 04:23:15.968935   66293 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1004 04:23:15.968946   66293 start.go:729] Will try again in 5 seconds ...
	I1004 04:23:15.991914   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Start
	I1004 04:23:15.992106   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring networks are active...
	I1004 04:23:15.992995   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring network default is active
	I1004 04:23:15.993392   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring network mk-embed-certs-934812 is active
	I1004 04:23:15.993728   66755 main.go:141] libmachine: (embed-certs-934812) Getting domain xml...
	I1004 04:23:15.994410   66755 main.go:141] libmachine: (embed-certs-934812) Creating domain...
	I1004 04:23:17.232262   66755 main.go:141] libmachine: (embed-certs-934812) Waiting to get IP...
	I1004 04:23:17.233339   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.233793   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.233879   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.233797   67957 retry.go:31] will retry after 221.075745ms: waiting for machine to come up
	I1004 04:23:17.456413   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.456917   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.456941   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.456869   67957 retry.go:31] will retry after 354.386237ms: waiting for machine to come up
	I1004 04:23:17.812523   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.812949   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.812973   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.812905   67957 retry.go:31] will retry after 338.999517ms: waiting for machine to come up
	I1004 04:23:18.153589   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:18.154029   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:18.154056   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:18.153987   67957 retry.go:31] will retry after 555.533205ms: waiting for machine to come up
	I1004 04:23:18.710680   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:18.711155   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:18.711181   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:18.711104   67957 retry.go:31] will retry after 733.812197ms: waiting for machine to come up
	I1004 04:23:20.970507   66293 start.go:360] acquireMachinesLock for no-preload-658545: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:23:19.447202   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:19.447644   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:19.447671   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:19.447600   67957 retry.go:31] will retry after 575.303848ms: waiting for machine to come up
	I1004 04:23:20.024465   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:20.024788   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:20.024819   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:20.024735   67957 retry.go:31] will retry after 894.593683ms: waiting for machine to come up
	I1004 04:23:20.920880   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:20.921499   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:20.921522   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:20.921480   67957 retry.go:31] will retry after 924.978895ms: waiting for machine to come up
	I1004 04:23:21.848064   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:21.848498   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:21.848619   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:21.848550   67957 retry.go:31] will retry after 1.554806984s: waiting for machine to come up
	I1004 04:23:23.404569   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:23.404936   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:23.404964   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:23.404884   67957 retry.go:31] will retry after 1.700496318s: waiting for machine to come up
	I1004 04:23:25.106988   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:25.107410   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:25.107441   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:25.107351   67957 retry.go:31] will retry after 1.913555474s: waiting for machine to come up
	I1004 04:23:27.022672   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:27.023134   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:27.023161   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:27.023096   67957 retry.go:31] will retry after 3.208946613s: waiting for machine to come up
	I1004 04:23:30.235462   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:30.235910   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:30.235942   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:30.235868   67957 retry.go:31] will retry after 3.125545279s: waiting for machine to come up
	I1004 04:23:33.364563   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.365007   66755 main.go:141] libmachine: (embed-certs-934812) Found IP for machine: 192.168.61.74
	I1004 04:23:33.365031   66755 main.go:141] libmachine: (embed-certs-934812) Reserving static IP address...
	I1004 04:23:33.365047   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has current primary IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.365595   66755 main.go:141] libmachine: (embed-certs-934812) Reserved static IP address: 192.168.61.74
	I1004 04:23:33.365628   66755 main.go:141] libmachine: (embed-certs-934812) Waiting for SSH to be available...
	I1004 04:23:33.365648   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "embed-certs-934812", mac: "52:54:00:25:fb:50", ip: "192.168.61.74"} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.365667   66755 main.go:141] libmachine: (embed-certs-934812) DBG | skip adding static IP to network mk-embed-certs-934812 - found existing host DHCP lease matching {name: "embed-certs-934812", mac: "52:54:00:25:fb:50", ip: "192.168.61.74"}
	I1004 04:23:33.365682   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Getting to WaitForSSH function...
	I1004 04:23:33.367835   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.368155   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.368185   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.368297   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Using SSH client type: external
	I1004 04:23:33.368322   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa (-rw-------)
	I1004 04:23:33.368359   66755 main.go:141] libmachine: (embed-certs-934812) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.74 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:23:33.368369   66755 main.go:141] libmachine: (embed-certs-934812) DBG | About to run SSH command:
	I1004 04:23:33.368377   66755 main.go:141] libmachine: (embed-certs-934812) DBG | exit 0
	I1004 04:23:33.496067   66755 main.go:141] libmachine: (embed-certs-934812) DBG | SSH cmd err, output: <nil>: 
	I1004 04:23:33.496559   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetConfigRaw
	I1004 04:23:33.497310   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:33.500858   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.501360   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.501403   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.501750   66755 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/config.json ...
	I1004 04:23:33.502058   66755 machine.go:93] provisionDockerMachine start ...
	I1004 04:23:33.502084   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:33.502303   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.505899   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.506442   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.506475   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.506686   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.506947   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.507165   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.507324   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.507541   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.507744   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.507757   66755 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:23:33.624518   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:23:33.624547   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.624795   66755 buildroot.go:166] provisioning hostname "embed-certs-934812"
	I1004 04:23:33.624826   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.625021   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.627597   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.627916   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.627948   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.628115   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.628312   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.628444   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.628608   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.628785   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.629023   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.629040   66755 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-934812 && echo "embed-certs-934812" | sudo tee /etc/hostname
	I1004 04:23:33.758642   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-934812
	
	I1004 04:23:33.758681   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.761325   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.761654   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.761696   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.761849   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.762034   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.762164   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.762297   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.762426   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.762636   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.762652   66755 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-934812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-934812/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-934812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:23:33.889571   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:33.889601   66755 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:23:33.889642   66755 buildroot.go:174] setting up certificates
	I1004 04:23:33.889654   66755 provision.go:84] configureAuth start
	I1004 04:23:33.889681   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.889992   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:33.892657   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.893063   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.893087   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.893310   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.895770   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.896126   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.896162   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.896328   66755 provision.go:143] copyHostCerts
	I1004 04:23:33.896397   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:23:33.896408   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:23:33.896472   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:23:33.896565   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:23:33.896573   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:23:33.896595   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:23:33.896652   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:23:33.896659   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:23:33.896678   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:23:33.896724   66755 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.embed-certs-934812 san=[127.0.0.1 192.168.61.74 embed-certs-934812 localhost minikube]
	I1004 04:23:33.997867   66755 provision.go:177] copyRemoteCerts
	I1004 04:23:33.997923   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:23:33.997950   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.001050   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.001422   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.001461   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.001733   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.001961   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.002125   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.002246   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.090823   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:23:34.116934   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1004 04:23:34.669084   67282 start.go:364] duration metric: took 2m46.052475725s to acquireMachinesLock for "old-k8s-version-420062"
	I1004 04:23:34.669158   67282 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:34.669168   67282 fix.go:54] fixHost starting: 
	I1004 04:23:34.669584   67282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:34.669640   67282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:34.686790   67282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I1004 04:23:34.687312   67282 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:34.687829   67282 main.go:141] libmachine: Using API Version  1
	I1004 04:23:34.687857   67282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:34.688238   67282 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:34.688415   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:34.688579   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetState
	I1004 04:23:34.690288   67282 fix.go:112] recreateIfNeeded on old-k8s-version-420062: state=Stopped err=<nil>
	I1004 04:23:34.690326   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	W1004 04:23:34.690467   67282 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:34.692283   67282 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-420062" ...
	I1004 04:23:34.143763   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 04:23:34.168897   66755 provision.go:87] duration metric: took 279.227966ms to configureAuth
	I1004 04:23:34.168929   66755 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:23:34.169096   66755 config.go:182] Loaded profile config "embed-certs-934812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:23:34.169168   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.171638   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.171952   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.171977   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.172178   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.172349   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.172503   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.172594   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.172717   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:34.172924   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:34.172943   66755 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:23:34.411661   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:23:34.411690   66755 machine.go:96] duration metric: took 909.61315ms to provisionDockerMachine
	I1004 04:23:34.411703   66755 start.go:293] postStartSetup for "embed-certs-934812" (driver="kvm2")
	I1004 04:23:34.411716   66755 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:23:34.411734   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.412070   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:23:34.412099   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.415246   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.415583   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.415643   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.415802   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.415997   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.416170   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.416322   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.507385   66755 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:23:34.511963   66755 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:23:34.511990   66755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:23:34.512064   66755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:23:34.512152   66755 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:23:34.512270   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:23:34.522375   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:34.547860   66755 start.go:296] duration metric: took 136.143527ms for postStartSetup
	I1004 04:23:34.547904   66755 fix.go:56] duration metric: took 18.578910472s for fixHost
	I1004 04:23:34.547931   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.550715   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.551031   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.551067   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.551194   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.551391   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.551568   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.551724   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.551903   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:34.552055   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:34.552064   66755 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:23:34.668944   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015814.641353752
	
	I1004 04:23:34.668966   66755 fix.go:216] guest clock: 1728015814.641353752
	I1004 04:23:34.668974   66755 fix.go:229] Guest: 2024-10-04 04:23:34.641353752 +0000 UTC Remote: 2024-10-04 04:23:34.547909289 +0000 UTC m=+265.449211021 (delta=93.444463ms)
	I1004 04:23:34.668993   66755 fix.go:200] guest clock delta is within tolerance: 93.444463ms
	I1004 04:23:34.668999   66755 start.go:83] releasing machines lock for "embed-certs-934812", held for 18.70003051s
	I1004 04:23:34.669024   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.669299   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:34.672346   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.672757   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.672796   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.672966   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673609   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673816   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673940   66755 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:23:34.673982   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.674020   66755 ssh_runner.go:195] Run: cat /version.json
	I1004 04:23:34.674043   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.676934   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677085   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677379   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.677406   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677449   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.677480   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677560   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.677677   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.677758   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.677811   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.677873   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.677928   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.677979   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.678022   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.761509   66755 ssh_runner.go:195] Run: systemctl --version
	I1004 04:23:34.784487   66755 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:23:34.934037   66755 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:23:34.942569   66755 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:23:34.942642   66755 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:23:34.960164   66755 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:23:34.960197   66755 start.go:495] detecting cgroup driver to use...
	I1004 04:23:34.960276   66755 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:23:34.979195   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:23:34.994660   66755 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:23:34.994747   66755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:23:35.011209   66755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:23:35.031746   66755 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:23:35.146164   66755 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:23:35.287092   66755 docker.go:233] disabling docker service ...
	I1004 04:23:35.287167   66755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:23:35.308007   66755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:23:35.323235   66755 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:23:35.473583   66755 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:23:35.610098   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:23:35.624276   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:23:35.643810   66755 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:23:35.643873   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.655804   66755 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:23:35.655875   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.668260   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.679770   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.692649   66755 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:23:35.704364   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.715539   66755 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.739272   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.754538   66755 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:23:35.766476   66755 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:23:35.766566   66755 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:23:35.781677   66755 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:23:35.792640   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:35.910787   66755 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:23:36.015877   66755 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:23:36.015948   66755 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:23:36.021573   66755 start.go:563] Will wait 60s for crictl version
	I1004 04:23:36.021642   66755 ssh_runner.go:195] Run: which crictl
	I1004 04:23:36.025605   66755 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:23:36.064644   66755 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:23:36.064714   66755 ssh_runner.go:195] Run: crio --version
	I1004 04:23:36.094751   66755 ssh_runner.go:195] Run: crio --version
	I1004 04:23:36.127213   66755 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:23:34.693590   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .Start
	I1004 04:23:34.693792   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring networks are active...
	I1004 04:23:34.694582   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring network default is active
	I1004 04:23:34.694917   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring network mk-old-k8s-version-420062 is active
	I1004 04:23:34.695322   67282 main.go:141] libmachine: (old-k8s-version-420062) Getting domain xml...
	I1004 04:23:34.696052   67282 main.go:141] libmachine: (old-k8s-version-420062) Creating domain...
	I1004 04:23:35.995511   67282 main.go:141] libmachine: (old-k8s-version-420062) Waiting to get IP...
	I1004 04:23:35.996465   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:35.996962   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:35.997031   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:35.996923   68093 retry.go:31] will retry after 296.620059ms: waiting for machine to come up
	I1004 04:23:36.295737   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:36.296226   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:36.296257   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:36.296182   68093 retry.go:31] will retry after 311.736827ms: waiting for machine to come up
	I1004 04:23:36.610158   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:36.610804   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:36.610829   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:36.610759   68093 retry.go:31] will retry after 440.646496ms: waiting for machine to come up
	I1004 04:23:37.053487   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:37.053956   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:37.053981   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:37.053923   68093 retry.go:31] will retry after 550.190101ms: waiting for machine to come up
	I1004 04:23:37.605404   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:37.605775   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:37.605815   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:37.605743   68093 retry.go:31] will retry after 721.648529ms: waiting for machine to come up
	I1004 04:23:38.328819   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:38.329323   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:38.329362   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:38.329281   68093 retry.go:31] will retry after 825.234448ms: waiting for machine to come up
	I1004 04:23:36.128549   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:36.131439   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:36.131827   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:36.131856   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:36.132054   66755 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1004 04:23:36.136650   66755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
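
The bash one-liner above strips any existing host.minikube.internal entry from /etc/hosts and appends the host gateway IP in its place. As a rough local equivalent (a sketch only; minikube itself runs the shell pipeline shown above over SSH, and the scratch path here is made up):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHostsEntry rewrites path so that exactly one line maps host to ip,
    // mirroring the "{ grep -v ...; echo ...; } > tmp; cp tmp /etc/hosts" pipeline.
    func upsertHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // drop any stale entry for this host name
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// Example against a scratch copy rather than the real /etc/hosts.
    	_ = upsertHostsEntry("/tmp/hosts.example", "192.168.61.1", "host.minikube.internal")
    }
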
	I1004 04:23:36.149563   66755 kubeadm.go:883] updating cluster {Name:embed-certs-934812 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:23:36.149691   66755 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:23:36.149738   66755 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:36.188235   66755 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:23:36.188316   66755 ssh_runner.go:195] Run: which lz4
	I1004 04:23:36.192619   66755 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:23:36.196876   66755 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:23:36.196909   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 04:23:37.711672   66755 crio.go:462] duration metric: took 1.519102092s to copy over tarball
	I1004 04:23:37.711752   66755 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:23:39.155736   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:39.156199   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:39.156229   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:39.156150   68093 retry.go:31] will retry after 970.793402ms: waiting for machine to come up
	I1004 04:23:40.128963   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:40.129454   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:40.129507   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:40.129419   68093 retry.go:31] will retry after 1.460395601s: waiting for machine to come up
	I1004 04:23:41.592145   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:41.592653   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:41.592677   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:41.592600   68093 retry.go:31] will retry after 1.397092356s: waiting for machine to come up
	I1004 04:23:42.992176   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:42.992670   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:42.992724   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:42.992663   68093 retry.go:31] will retry after 1.560294099s: waiting for machine to come up
	I1004 04:23:39.864408   66755 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.152629063s)
	I1004 04:23:39.864437   66755 crio.go:469] duration metric: took 2.152732931s to extract the tarball
	I1004 04:23:39.864446   66755 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:23:39.902496   66755 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:39.956348   66755 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:23:39.956373   66755 cache_images.go:84] Images are preloaded, skipping loading
	I1004 04:23:39.956381   66755 kubeadm.go:934] updating node { 192.168.61.74 8443 v1.31.1 crio true true} ...
	I1004 04:23:39.956509   66755 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-934812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:23:39.956572   66755 ssh_runner.go:195] Run: crio config
	I1004 04:23:40.014396   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:23:40.014423   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:23:40.014436   66755 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:23:40.014470   66755 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.74 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-934812 NodeName:embed-certs-934812 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:23:40.014642   66755 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-934812"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:23:40.014728   66755 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:23:40.025328   66755 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:23:40.025441   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:23:40.035733   66755 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1004 04:23:40.057427   66755 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:23:40.078636   66755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1004 04:23:40.100583   66755 ssh_runner.go:195] Run: grep 192.168.61.74	control-plane.minikube.internal$ /etc/hosts
	I1004 04:23:40.104780   66755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.74	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:23:40.118484   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:40.245425   66755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:23:40.268739   66755 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812 for IP: 192.168.61.74
	I1004 04:23:40.268764   66755 certs.go:194] generating shared ca certs ...
	I1004 04:23:40.268792   66755 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:23:40.268962   66755 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:23:40.269022   66755 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:23:40.269035   66755 certs.go:256] generating profile certs ...
	I1004 04:23:40.269145   66755 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/client.key
	I1004 04:23:40.269226   66755 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.key.0181efa9
	I1004 04:23:40.269290   66755 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.key
	I1004 04:23:40.269436   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:23:40.269483   66755 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:23:40.269497   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:23:40.269535   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:23:40.269575   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:23:40.269607   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:23:40.269658   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:40.270269   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:23:40.316579   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:23:40.352928   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:23:40.383124   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:23:40.410211   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1004 04:23:40.442388   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 04:23:40.473580   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:23:40.501589   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:23:40.527299   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:23:40.551994   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:23:40.576644   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:23:40.601518   66755 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:23:40.620092   66755 ssh_runner.go:195] Run: openssl version
	I1004 04:23:40.626451   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:23:40.637754   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.642413   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.642472   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.648449   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:23:40.659371   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:23:40.670276   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.674793   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.674844   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.680550   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:23:40.691439   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:23:40.702237   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.706876   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.706937   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.712970   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
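
The openssl/ln pairs above install the extra certificates where OpenSSL-based clients can find them: files in /etc/ssl/certs are looked up by the hash of their subject name, so for each PEM file the subject hash is computed and a "<hash>.0" symlink is created (e.g. b5213941.0 for minikubeCA.pem). A small sketch of the same two steps driven from Go; the helper name is made up and the call needs root to write under /etc/ssl/certs.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCertByHash computes the OpenSSL subject-name hash of certPath and
    // symlinks it as /etc/ssl/certs/<hash>.0, like the ln -fs calls in the log.
    func linkCertByHash(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	os.Remove(link) // replace any stale link, in the spirit of ln -fs
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
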
	I1004 04:23:40.724505   66755 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:23:40.729486   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:23:40.735720   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:23:40.741680   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:23:40.747975   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:23:40.754056   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:23:40.760235   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
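
The "-checkend 86400" runs above verify that none of the reused control-plane certificates expire within the next 24 hours before the cluster is restarted with them. The same check can be expressed directly in Go with crypto/x509; this is a sketch of the equivalent test, not how minikube does it (it shells out to openssl as shown).

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the question "openssl x509 -checkend 86400" answers for 24 hours.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
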
	I1004 04:23:40.766463   66755 kubeadm.go:392] StartCluster: {Name:embed-certs-934812 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:23:40.766576   66755 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:23:40.766635   66755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:23:40.805927   66755 cri.go:89] found id: ""
	I1004 04:23:40.805995   66755 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:23:40.816693   66755 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:23:40.816717   66755 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:23:40.816770   66755 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:23:40.827024   66755 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:23:40.828056   66755 kubeconfig.go:125] found "embed-certs-934812" server: "https://192.168.61.74:8443"
	I1004 04:23:40.830076   66755 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:23:40.840637   66755 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.74
	I1004 04:23:40.840673   66755 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:23:40.840686   66755 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:23:40.840741   66755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:23:40.877659   66755 cri.go:89] found id: ""
	I1004 04:23:40.877737   66755 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:23:40.894712   66755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:23:40.904202   66755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:23:40.904224   66755 kubeadm.go:157] found existing configuration files:
	
	I1004 04:23:40.904290   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:23:40.913941   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:23:40.914003   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:23:40.924730   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:23:40.934706   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:23:40.934784   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:23:40.945008   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:23:40.954864   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:23:40.954949   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:23:40.965357   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:23:40.975380   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:23:40.975459   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:23:40.986157   66755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:23:41.001260   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:41.129150   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:41.839910   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:42.059079   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:42.132717   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:42.204227   66755 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:23:42.204389   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:42.704572   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.205099   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.704555   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.720983   66755 api_server.go:72] duration metric: took 1.516755506s to wait for apiserver process to appear ...
	I1004 04:23:43.721020   66755 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:23:43.721043   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.578729   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:23:46.578764   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:23:46.578780   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.611578   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:23:46.611609   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:23:46.721894   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.728611   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:46.728649   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:47.221889   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:47.229348   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:47.229382   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:47.721971   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:47.741433   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:47.741460   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:48.222154   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:48.226802   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 200:
	ok
	I1004 04:23:48.233611   66755 api_server.go:141] control plane version: v1.31.1
	I1004 04:23:48.233645   66755 api_server.go:131] duration metric: took 4.512616682s to wait for apiserver health ...
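
The healthz sequence above is the usual pattern after an apiserver restart: first 403 (the probe is anonymous and RBAC has not been bootstrapped yet), then 500 while the rbac/bootstrap-roles and scheduling poststart hooks finish, then 200. A minimal poller in the same spirit is sketched below; TLS verification is skipped only because this sketch has no access to the cluster CA, and the URL and timeout are taken from the log, not from minikube's code.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver %s did not become healthy within %v", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.74:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
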
	I1004 04:23:48.233655   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:23:48.233662   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:23:48.235421   66755 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:23:44.555619   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:44.556128   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:44.556154   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:44.556061   68093 retry.go:31] will retry after 2.564674777s: waiting for machine to come up
	I1004 04:23:47.123819   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:47.124235   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:47.124263   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:47.124181   68093 retry.go:31] will retry after 2.408805702s: waiting for machine to come up
	I1004 04:23:48.236675   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:23:48.248304   66755 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:23:48.273584   66755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:23:48.288132   66755 system_pods.go:59] 8 kube-system pods found
	I1004 04:23:48.288174   66755 system_pods.go:61] "coredns-7c65d6cfc9-z7pqn" [f206a8bf-5c18-49f2-9fae-a48a38d608a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:23:48.288208   66755 system_pods.go:61] "etcd-embed-certs-934812" [07a8f2db-6d47-469b-b0e4-749d1e106522] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:23:48.288218   66755 system_pods.go:61] "kube-apiserver-embed-certs-934812" [f36bc69a-a04e-40c2-8f78-a983ddbf28aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:23:48.288227   66755 system_pods.go:61] "kube-controller-manager-embed-certs-934812" [06d73118-fa31-4c98-b1e8-099611718b19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:23:48.288232   66755 system_pods.go:61] "kube-proxy-9qpgb" [6d833f16-4b8e-4409-99b6-214babe699c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 04:23:48.288238   66755 system_pods.go:61] "kube-scheduler-embed-certs-934812" [d076a245-49b6-4d8b-949a-2b559cd1d4d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:23:48.288243   66755 system_pods.go:61] "metrics-server-6867b74b74-d5b6b" [f4ec5d83-22a7-49e5-97e9-3519a29484fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:23:48.288250   66755 system_pods.go:61] "storage-provisioner" [2e76a95b-d6e2-4c1d-b954-3da8c2670a4a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 04:23:48.288259   66755 system_pods.go:74] duration metric: took 14.644463ms to wait for pod list to return data ...
	I1004 04:23:48.288265   66755 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:23:48.293121   66755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:23:48.293153   66755 node_conditions.go:123] node cpu capacity is 2
	I1004 04:23:48.293166   66755 node_conditions.go:105] duration metric: took 4.895489ms to run NodePressure ...
	I1004 04:23:48.293184   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:48.633398   66755 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:23:48.639243   66755 kubeadm.go:739] kubelet initialised
	I1004 04:23:48.639282   66755 kubeadm.go:740] duration metric: took 5.842777ms waiting for restarted kubelet to initialise ...
	I1004 04:23:48.639293   66755 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:23:48.650460   66755 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace to be "Ready" ...
	I1004 04:23:49.535979   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:49.536361   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:49.536388   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:49.536332   68093 retry.go:31] will retry after 4.242056709s: waiting for machine to come up
	I1004 04:23:50.657094   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:52.657717   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"False"
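
The pod_ready lines report the coredns pod's Ready condition still being "False" while the restarted control plane settles. The same condition can be read with kubectl's jsonpath output; below is a hedged Go wrapper around that call, assuming kubectl and the embed-certs-934812 context are available on the machine running it (they are not part of this log).

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // podReady returns true when the pod's Ready condition is "True",
    // which is what the pod_ready.go wait loop in the log is checking for.
    func podReady(context, namespace, pod string) (bool, error) {
    	out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
    		"get", "pod", pod, "-o",
    		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	if err != nil {
    		return false, err
    	}
    	return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
    	ready, err := podReady("embed-certs-934812", "kube-system", "coredns-7c65d6cfc9-z7pqn")
    	fmt.Println(ready, err)
    }
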
	I1004 04:23:55.089234   67541 start.go:364] duration metric: took 2m31.706739813s to acquireMachinesLock for "default-k8s-diff-port-281471"
	I1004 04:23:55.089300   67541 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:55.089311   67541 fix.go:54] fixHost starting: 
	I1004 04:23:55.089673   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:55.089718   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:55.110154   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
	I1004 04:23:55.110566   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:55.111001   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:23:55.111025   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:55.111417   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:55.111627   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:23:55.111794   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:23:55.113328   67541 fix.go:112] recreateIfNeeded on default-k8s-diff-port-281471: state=Stopped err=<nil>
	I1004 04:23:55.113356   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	W1004 04:23:55.113537   67541 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:55.115190   67541 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-281471" ...
	I1004 04:23:53.783128   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.783631   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has current primary IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.783669   67282 main.go:141] libmachine: (old-k8s-version-420062) Found IP for machine: 192.168.50.146
	I1004 04:23:53.783684   67282 main.go:141] libmachine: (old-k8s-version-420062) Reserving static IP address...
	I1004 04:23:53.784173   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "old-k8s-version-420062", mac: "52:54:00:fb:e4:4f", ip: "192.168.50.146"} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.784206   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | skip adding static IP to network mk-old-k8s-version-420062 - found existing host DHCP lease matching {name: "old-k8s-version-420062", mac: "52:54:00:fb:e4:4f", ip: "192.168.50.146"}
	I1004 04:23:53.784222   67282 main.go:141] libmachine: (old-k8s-version-420062) Reserved static IP address: 192.168.50.146
	I1004 04:23:53.784238   67282 main.go:141] libmachine: (old-k8s-version-420062) Waiting for SSH to be available...
	I1004 04:23:53.784250   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Getting to WaitForSSH function...
	I1004 04:23:53.786551   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.786985   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.787016   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.787207   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using SSH client type: external
	I1004 04:23:53.787244   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa (-rw-------)
	I1004 04:23:53.787285   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:23:53.787301   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | About to run SSH command:
	I1004 04:23:53.787315   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | exit 0
	I1004 04:23:53.916121   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | SSH cmd err, output: <nil>: 
	I1004 04:23:53.916487   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetConfigRaw
	I1004 04:23:53.917200   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:53.919846   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.920295   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.920323   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.920641   67282 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/config.json ...
	I1004 04:23:53.920902   67282 machine.go:93] provisionDockerMachine start ...
	I1004 04:23:53.920930   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:53.921137   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:53.923647   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.924000   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.924039   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.924198   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:53.924375   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:53.924508   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:53.924659   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:53.924796   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:53.925024   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:53.925036   67282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:23:54.044565   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:23:54.044595   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.044820   67282 buildroot.go:166] provisioning hostname "old-k8s-version-420062"
	I1004 04:23:54.044837   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.045006   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.047682   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.048032   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.048060   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.048186   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.048376   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.048525   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.048694   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.048853   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.049077   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.049098   67282 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-420062 && echo "old-k8s-version-420062" | sudo tee /etc/hostname
	I1004 04:23:54.183772   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-420062
	
	I1004 04:23:54.183835   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.186969   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.187333   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.187368   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.187754   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.188000   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.188177   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.188334   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.188559   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.188778   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.188803   67282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-420062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-420062/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-420062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:23:54.313827   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
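The guarded /etc/hosts rewrite above is what keeps the hostname change idempotent: the 127.0.1.1 entry is only added or replaced when the new hostname is not already present. A minimal Go sketch of composing that shell command, in the spirit of the provisioning code (the hostsFixupCmd helper is hypothetical, not minikube's actual function):

	package main

	import "fmt"

	// hostsFixupCmd builds the guarded shell command from the log above:
	// it rewrites or appends the 127.0.1.1 entry only when the hostname is
	// not already present in /etc/hosts. Hypothetical helper, sketch only.
	func hostsFixupCmd(hostname string) string {
		return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	  fi
	fi`, hostname)
	}

	func main() {
		fmt.Println(hostsFixupCmd("old-k8s-version-420062"))
	}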
	I1004 04:23:54.313852   67282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:23:54.313896   67282 buildroot.go:174] setting up certificates
	I1004 04:23:54.313913   67282 provision.go:84] configureAuth start
	I1004 04:23:54.313925   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.314208   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:54.317028   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.317378   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.317408   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.317549   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.320292   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.320690   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.320718   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.320874   67282 provision.go:143] copyHostCerts
	I1004 04:23:54.320945   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:23:54.320957   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:23:54.321020   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:23:54.321144   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:23:54.321157   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:23:54.321184   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:23:54.321269   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:23:54.321279   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:23:54.321306   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:23:54.321378   67282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-420062 san=[127.0.0.1 192.168.50.146 localhost minikube old-k8s-version-420062]
	I1004 04:23:54.395370   67282 provision.go:177] copyRemoteCerts
	I1004 04:23:54.395422   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:23:54.395452   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.398647   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.399153   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.399194   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.399392   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.399582   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.399852   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.399991   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:54.491055   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:23:54.523206   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1004 04:23:54.549843   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:23:54.580403   67282 provision.go:87] duration metric: took 266.475364ms to configureAuth
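configureAuth above regenerates the Docker machine server certificate with SANs covering the loopback address, the machine IP and the usual host names, then scps ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A small sketch of assembling that SAN list (the helper name is illustrative; the values are the ones visible in the san=[...] line of the log):

	package main

	import "fmt"

	// serverCertSANs mirrors the san=[...] list in the provisioning log:
	// loopback, the machine IP, and the usual host names. Illustrative only.
	func serverCertSANs(ip, machineName string) []string {
		return []string{"127.0.0.1", ip, "localhost", "minikube", machineName}
	}

	func main() {
		fmt.Println(serverCertSANs("192.168.50.146", "old-k8s-version-420062"))
	}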
	I1004 04:23:54.580438   67282 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:23:54.580645   67282 config.go:182] Loaded profile config "old-k8s-version-420062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1004 04:23:54.580736   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.583200   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.583489   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.583522   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.583672   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.583871   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.584066   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.584195   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.584402   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.584567   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.584582   67282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:23:54.835402   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:23:54.835436   67282 machine.go:96] duration metric: took 914.509404ms to provisionDockerMachine
	I1004 04:23:54.835451   67282 start.go:293] postStartSetup for "old-k8s-version-420062" (driver="kvm2")
	I1004 04:23:54.835466   67282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:23:54.835491   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:54.835870   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:23:54.835902   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.838257   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.838645   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.838670   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.838810   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.838972   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.839117   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.839247   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:54.927041   67282 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:23:54.931330   67282 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:23:54.931357   67282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:23:54.931424   67282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:23:54.931538   67282 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:23:54.931658   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:23:54.941402   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:54.967433   67282 start.go:296] duration metric: took 131.968424ms for postStartSetup
	I1004 04:23:54.967495   67282 fix.go:56] duration metric: took 20.29830643s for fixHost
	I1004 04:23:54.967523   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.970138   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.970485   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.970502   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.970802   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.971000   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.971164   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.971330   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.971560   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.971739   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.971751   67282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:23:55.089031   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015835.056238818
	
	I1004 04:23:55.089054   67282 fix.go:216] guest clock: 1728015835.056238818
	I1004 04:23:55.089063   67282 fix.go:229] Guest: 2024-10-04 04:23:55.056238818 +0000 UTC Remote: 2024-10-04 04:23:54.967501465 +0000 UTC m=+186.499621032 (delta=88.737353ms)
	I1004 04:23:55.089086   67282 fix.go:200] guest clock delta is within tolerance: 88.737353ms
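fix.go compares the guest clock (read over SSH with date +%s.%N) against the host clock and only forces a resync when the delta exceeds a tolerance; here the roughly 88ms delta is accepted. A rough Go sketch of that check, assuming a 2s tolerance purely for illustration (the real threshold may differ):

	package main

	import (
		"fmt"
		"time"
	)

	// clockWithinTolerance reports whether the guest/host clock delta is
	// small enough to skip a resync. The 2s threshold is an assumption for
	// this sketch, not necessarily minikube's value.
	func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(88 * time.Millisecond) // delta of the same order as in the log above
		fmt.Println(clockWithinTolerance(guest, host, 2*time.Second)) // true
	}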
	I1004 04:23:55.089093   67282 start.go:83] releasing machines lock for "old-k8s-version-420062", held for 20.419961099s
	I1004 04:23:55.089124   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.089472   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:55.092047   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.092519   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.092552   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.092784   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093364   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093566   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093670   67282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:23:55.093715   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:55.093808   67282 ssh_runner.go:195] Run: cat /version.json
	I1004 04:23:55.093834   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:55.096451   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.096829   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.096862   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.096881   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.097173   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:55.097364   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:55.097446   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.097474   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.097548   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:55.097685   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:55.097816   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:55.097823   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:55.097953   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:55.098106   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:55.207195   67282 ssh_runner.go:195] Run: systemctl --version
	I1004 04:23:55.214080   67282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:23:55.369882   67282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:23:55.376111   67282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:23:55.376171   67282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:23:55.393916   67282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:23:55.393945   67282 start.go:495] detecting cgroup driver to use...
	I1004 04:23:55.394015   67282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:23:55.411330   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:23:55.427665   67282 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:23:55.427734   67282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:23:55.445180   67282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:23:55.465131   67282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:23:55.596260   67282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:23:55.781647   67282 docker.go:233] disabling docker service ...
	I1004 04:23:55.781711   67282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:23:55.801252   67282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:23:55.817688   67282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:23:55.952563   67282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:23:56.081096   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:23:56.096194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:23:56.116859   67282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1004 04:23:56.116924   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.129060   67282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:23:56.129133   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.141246   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.158759   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.172580   67282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
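CRI-O is pointed at the pause image and the cgroupfs driver by rewriting /etc/crio/crio.conf.d/02-crio.conf in place with sed, as the commands above show. A hedged Go sketch that just assembles those shell invocations (the crioConfSeds builder is illustrative; in minikube the commands are executed remotely via ssh_runner):

	package main

	import "fmt"

	// crioConfSeds returns shell commands that point CRI-O at a pause image
	// and the cgroupfs driver, matching the sed expressions in the log above.
	// Sketch only; the real code runs these on the guest over SSH.
	func crioConfSeds(pauseImage string) []string {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		return []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		}
	}

	func main() {
		for _, cmd := range crioConfSeds("registry.k8s.io/pause:3.2") {
			fmt.Println(cmd)
		}
	}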
	I1004 04:23:56.192027   67282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:23:56.206698   67282 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:23:56.206757   67282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:23:56.223074   67282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
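When the bridge-nf-call-iptables sysctl cannot be read (the bridge netfilter module is not loaded yet), the code treats that as non-fatal, falls back to modprobe br_netfilter and then enables IPv4 forwarding, as the three commands above show. A simplified Go sketch of that fallback order using os/exec (the run wrapper is a stand-in for ssh_runner, which executes these on the guest over SSH):

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command locally; a thin stand-in for minikube's
	// ssh_runner, which runs these on the guest.
	func run(name string, args ...string) error {
		return exec.Command(name, args...).Run()
	}

	func main() {
		// Probing the sysctl may fail before br_netfilter is loaded; that is
		// tolerated, and modprobe is attempted next.
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			log.Printf("netfilter probe failed (might be okay): %v", err)
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				log.Printf("modprobe br_netfilter failed: %v", err)
			}
		}
		if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			log.Fatalf("enabling ip_forward: %v", err)
		}
	}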
	I1004 04:23:56.241061   67282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:56.365616   67282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:23:56.474445   67282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:23:56.474519   67282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:23:56.480077   67282 start.go:563] Will wait 60s for crictl version
	I1004 04:23:56.480133   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:23:56.485207   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:23:56.537710   67282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:23:56.537802   67282 ssh_runner.go:195] Run: crio --version
	I1004 04:23:56.571679   67282 ssh_runner.go:195] Run: crio --version
	I1004 04:23:56.605639   67282 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1004 04:23:55.116525   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Start
	I1004 04:23:55.116723   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring networks are active...
	I1004 04:23:55.117665   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring network default is active
	I1004 04:23:55.118079   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring network mk-default-k8s-diff-port-281471 is active
	I1004 04:23:55.118565   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Getting domain xml...
	I1004 04:23:55.119417   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Creating domain...
	I1004 04:23:56.429715   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting to get IP...
	I1004 04:23:56.430752   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.431261   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.431353   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.431245   68239 retry.go:31] will retry after 200.843618ms: waiting for machine to come up
	I1004 04:23:56.633542   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.633974   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.634003   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.633923   68239 retry.go:31] will retry after 291.906374ms: waiting for machine to come up
	I1004 04:23:56.927325   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.927847   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.927880   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.927813   68239 retry.go:31] will retry after 374.509137ms: waiting for machine to come up
	I1004 04:23:57.304251   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.304713   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.304738   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:57.304671   68239 retry.go:31] will retry after 583.046975ms: waiting for machine to come up
	I1004 04:23:57.889410   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.889837   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.889868   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:57.889795   68239 retry.go:31] will retry after 549.483036ms: waiting for machine to come up
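While the default-k8s-diff-port-281471 VM boots, libmachine polls the libvirt DHCP leases for its MAC address and retries with an increasing, jittered delay until an IP address shows up. A hedged Go sketch of such a retry loop (the backoff growth and the placeholder address are assumptions, not the exact retry.go policy):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoIP = errors.New("unable to find current IP address")

	// waitForIP polls lookup until it returns an address, sleeping a little
	// longer (with jitter) after each failure, like the retry.go lines above.
	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		delay := 200 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			d := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
			time.Sleep(d)
			delay += delay / 2
		}
		return "", errNoIP
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 3 {
				return "", errNoIP
			}
			return "192.168.61.10", nil // placeholder address for the sketch
		}, 10)
		fmt.Println(ip, err)
	}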
	I1004 04:23:56.606945   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:56.610421   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:56.610952   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:56.610976   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:56.611373   67282 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1004 04:23:56.615872   67282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:23:56.629783   67282 kubeadm.go:883] updating cluster {Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:23:56.629932   67282 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1004 04:23:56.629983   67282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:56.690260   67282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:23:56.690343   67282 ssh_runner.go:195] Run: which lz4
	I1004 04:23:56.695808   67282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:23:56.701593   67282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:23:56.701623   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1004 04:23:54.156612   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"True"
	I1004 04:23:54.156637   66755 pod_ready.go:82] duration metric: took 5.506141622s for pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace to be "Ready" ...
	I1004 04:23:54.156646   66755 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:23:56.164534   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:58.166994   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:58.440643   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:58.441080   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:58.441109   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:58.441034   68239 retry.go:31] will retry after 585.437747ms: waiting for machine to come up
	I1004 04:23:59.027951   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.028414   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.028441   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:59.028369   68239 retry.go:31] will retry after 773.32668ms: waiting for machine to come up
	I1004 04:23:59.803329   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.803752   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.803793   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:59.803722   68239 retry.go:31] will retry after 936.396482ms: waiting for machine to come up
	I1004 04:24:00.741805   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:00.742328   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:00.742372   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:00.742262   68239 retry.go:31] will retry after 1.294836266s: waiting for machine to come up
	I1004 04:24:02.038222   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:02.038750   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:02.038785   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:02.038699   68239 retry.go:31] will retry after 2.282660025s: waiting for machine to come up
	I1004 04:23:58.525796   67282 crio.go:462] duration metric: took 1.830039762s to copy over tarball
	I1004 04:23:58.525868   67282 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:24:01.514552   67282 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.98865618s)
	I1004 04:24:01.514585   67282 crio.go:469] duration metric: took 2.988759159s to extract the tarball
	I1004 04:24:01.514595   67282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:24:01.562130   67282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:01.598856   67282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:24:01.598882   67282 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 04:24:01.598960   67282 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:01.599035   67282 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.599047   67282 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.599048   67282 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1004 04:24:01.599020   67282 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.599025   67282 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.598967   67282 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.598967   67282 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.600760   67282 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.600772   67282 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1004 04:24:01.600767   67282 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:01.600791   67282 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.600802   67282 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.600804   67282 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.600807   67282 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.600840   67282 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.837527   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.877366   67282 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1004 04:24:01.877413   67282 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.877464   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:01.882328   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.914693   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.934055   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.941737   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.943929   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.944540   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.948337   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.970977   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.995537   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1004 04:24:02.127073   67282 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1004 04:24:02.127097   67282 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1004 04:24:02.127121   67282 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.127121   67282 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.127156   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.127159   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128471   67282 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1004 04:24:02.128532   67282 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.128535   67282 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1004 04:24:02.128560   67282 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.128571   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128595   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128598   67282 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1004 04:24:02.128627   67282 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.128669   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128730   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1004 04:24:02.128761   67282 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1004 04:24:02.128783   67282 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1004 04:24:02.128815   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.133675   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.133724   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.141911   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.141950   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.141989   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.142044   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.263733   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.263744   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.263798   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.265990   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.297523   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.297566   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.379282   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.379318   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.379331   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.417271   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.454521   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.454559   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.496644   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1004 04:24:02.533632   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1004 04:24:02.533690   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1004 04:24:02.533750   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1004 04:24:02.568138   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1004 04:24:02.568153   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1004 04:24:02.911933   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:03.055844   67282 cache_images.go:92] duration metric: took 1.456943316s to LoadCachedImages
	W1004 04:24:03.055959   67282 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
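LoadCachedImages inspects each required image in the runtime, marks any image whose ID is missing as needing transfer, and falls back to the per-image archives under .minikube/cache/images; because the kube-controller-manager_v1.20.0 archive is absent here, the whole step is skipped with the warning above and the images will be pulled instead. A simplified Go sketch of those two decisions (function names are illustrative, not minikube's cache_images.go API):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// needsTransfer reports whether an image must be reloaded into the
	// runtime: true when inspecting it failed or returned a different ID.
	func needsTransfer(inspectedID, wantID string, inspectErr error) bool {
		return inspectErr != nil || inspectedID != wantID
	}

	// cachedImagePath is where the pre-pulled archives live, as seen above.
	func cachedImagePath(minikubeHome, image string) string {
		return filepath.Join(minikubeHome, "cache", "images", "amd64", image)
	}

	func main() {
		if needsTransfer("", "b9fa1895dcaa", fmt.Errorf("no such image")) {
			fmt.Println(`"registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer`)
		}
		home := "/home/jenkins/minikube-integration/19546-9647/.minikube"
		p := cachedImagePath(home, "registry.k8s.io/kube-controller-manager_v1.20.0")
		if _, err := os.Stat(p); err != nil {
			// Mirrors the "X Unable to load cached images" warning in the log.
			fmt.Printf("X Unable to load cached images: %v\n", err)
		}
	}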
	I1004 04:24:03.055976   67282 kubeadm.go:934] updating node { 192.168.50.146 8443 v1.20.0 crio true true} ...
	I1004 04:24:03.056087   67282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-420062 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:03.056162   67282 ssh_runner.go:195] Run: crio config
	I1004 04:24:03.103752   67282 cni.go:84] Creating CNI manager for ""
	I1004 04:24:03.103792   67282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:03.103805   67282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:03.103826   67282 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.146 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-420062 NodeName:old-k8s-version-420062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1004 04:24:03.103952   67282 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-420062"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
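Illustrative only and not part of the captured log: a minimal Go sketch of a sanity check over the rendered kubeadm config shown above, asserting that a couple of the expected fields are present before the file is shipped to the node. The check itself is hypothetical; the field values are taken from the dump.

// Hypothetical sanity check over an excerpt of the kubeadm config above.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// A tiny excerpt of the generated config from the log above.
	rendered := `
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
controlPlaneEndpoint: control-plane.minikube.internal:8443
`
	for _, want := range []string{
		"kubernetesVersion: v1.20.0",
		"controlPlaneEndpoint: control-plane.minikube.internal:8443",
	} {
		if !strings.Contains(rendered, want) {
			fmt.Printf("missing expected field: %q\n", want)
		}
	}
	fmt.Println("kubeadm config excerpt checked")
}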
	I1004 04:24:03.104008   67282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1004 04:24:03.114316   67282 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:03.114372   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:03.124059   67282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1004 04:24:03.143310   67282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:03.161143   67282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1004 04:24:03.178444   67282 ssh_runner.go:195] Run: grep 192.168.50.146	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:03.182235   67282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
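Illustrative only and not part of the captured log: the command above idempotently rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP. A minimal Go sketch of the same drop-stale-then-append logic (upsertHostsEntry is a hypothetical helper, not minikube's code):

// Sketch of the "remove any old entry, then append the current mapping" step.
package main

import (
	"fmt"
	"strings"
)

func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		fields := strings.Fields(line)
		if len(fields) > 0 && fields[len(fields)-1] == name {
			continue // drop the stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.50.1\tcontrol-plane.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.50.146", "control-plane.minikube.internal"))
}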
	I1004 04:24:03.195103   67282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:03.317820   67282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:03.334820   67282 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062 for IP: 192.168.50.146
	I1004 04:24:03.334840   67282 certs.go:194] generating shared ca certs ...
	I1004 04:24:03.334855   67282 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:03.335008   67282 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:03.335049   67282 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:03.335059   67282 certs.go:256] generating profile certs ...
	I1004 04:24:03.335156   67282 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.key
	I1004 04:24:03.335212   67282 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key.c1f9ed6b
	I1004 04:24:03.335260   67282 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key
	I1004 04:24:03.335368   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:03.335394   67282 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:03.335401   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:03.335426   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:03.335451   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:03.335476   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:03.335518   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:03.336260   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:03.373985   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:03.408150   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:03.444219   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:03.493160   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1004 04:24:00.665171   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:02.815874   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:04.022715   66755 pod_ready.go:93] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.022744   66755 pod_ready.go:82] duration metric: took 9.866089641s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.022756   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.028094   66755 pod_ready.go:93] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.028115   66755 pod_ready.go:82] duration metric: took 5.350911ms for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.028123   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.033106   66755 pod_ready.go:93] pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.033124   66755 pod_ready.go:82] duration metric: took 4.995208ms for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.033132   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9qpgb" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.037388   66755 pod_ready.go:93] pod "kube-proxy-9qpgb" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.037409   66755 pod_ready.go:82] duration metric: took 4.270278ms for pod "kube-proxy-9qpgb" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.037420   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.042717   66755 pod_ready.go:93] pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.042737   66755 pod_ready.go:82] duration metric: took 5.30887ms for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.042747   66755 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.324259   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:04.324749   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:04.324811   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:04.324726   68239 retry.go:31] will retry after 2.070089599s: waiting for machine to come up
	I1004 04:24:06.396547   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:06.396991   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:06.397015   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:06.396944   68239 retry.go:31] will retry after 3.403718824s: waiting for machine to come up
	I1004 04:24:03.533084   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 04:24:03.565405   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:03.613938   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:24:03.642711   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:03.674784   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:03.706968   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:03.731329   67282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:03.749003   67282 ssh_runner.go:195] Run: openssl version
	I1004 04:24:03.755219   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:03.766499   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.771322   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.771413   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.778185   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:03.790581   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:03.802556   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.807312   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.807373   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.813595   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:03.825043   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:03.835389   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.840004   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.840051   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.847540   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:03.862303   67282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:03.868029   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:03.874811   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:03.880797   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:03.886622   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:03.892273   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:03.898129   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
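Illustrative only and not part of the captured log: the `openssl x509 -checkend 86400` runs above ask whether each control-plane certificate will expire within the next 24 hours. A minimal Go equivalent using crypto/x509 (expiresWithin is a hypothetical helper; the certificate path is one of the files checked above):

// Report whether a PEM certificate expires within the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the certificate's NotAfter falls before now+d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}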
	I1004 04:24:03.905775   67282 kubeadm.go:392] StartCluster: {Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:03.905852   67282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:03.905890   67282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:03.954627   67282 cri.go:89] found id: ""
	I1004 04:24:03.954702   67282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:03.965146   67282 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:03.965170   67282 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:03.965236   67282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:03.975404   67282 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:03.976362   67282 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-420062" does not appear in /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:24:03.976990   67282 kubeconfig.go:62] /home/jenkins/minikube-integration/19546-9647/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-420062" cluster setting kubeconfig missing "old-k8s-version-420062" context setting]
	I1004 04:24:03.977906   67282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:03.979485   67282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:03.989487   67282 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.146
	I1004 04:24:03.989517   67282 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:03.989529   67282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:03.989577   67282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:04.031536   67282 cri.go:89] found id: ""
	I1004 04:24:04.031607   67282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:04.048652   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:04.057813   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:04.057830   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:04.057867   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:24:04.066213   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:04.066252   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:04.074904   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:24:04.083485   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:04.083522   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:04.092314   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:24:04.100528   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:04.100572   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:04.109232   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:24:04.118051   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:04.118091   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
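Illustrative only and not part of the captured log: a minimal Go sketch, under the assumption that the cleanup works roughly as the grep/rm pairs above suggest, of keeping each /etc/kubernetes kubeconfig only if it references https://control-plane.minikube.internal:8443 and removing it otherwise:

// Hypothetical stale-kubeconfig cleanup mirroring the grep/rm pairs above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, path := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at a different endpoint: treat as stale.
			fmt.Println("removing stale config:", path)
			_ = os.Remove(path)
		}
	}
}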
	I1004 04:24:04.127430   67282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:04.137949   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:04.272627   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:04.940435   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.181288   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.268873   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.373549   67282 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:05.373653   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:05.874543   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.374154   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.874343   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:07.374744   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:07.874734   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:08.374255   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
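Illustrative only and not part of the captured log: the repeated pgrep runs above are a 500ms polling loop that waits for the kube-apiserver process to appear after the kubeadm init phases. A minimal Go sketch of that pattern (the 10s timeout is an assumption; waitForAPIServer is a hypothetical helper):

// Poll for the apiserver process until it appears or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(10 * time.Second); err != nil {
		fmt.Println(err)
	}
}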
	I1004 04:24:06.050700   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:08.548473   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:09.802504   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:09.802912   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:09.802937   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:09.802870   68239 retry.go:31] will retry after 3.430575602s: waiting for machine to come up
	I1004 04:24:13.236792   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.237230   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Found IP for machine: 192.168.39.201
	I1004 04:24:13.237251   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Reserving static IP address...
	I1004 04:24:13.237268   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has current primary IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.237712   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-281471", mac: "52:54:00:cd:36:92", ip: "192.168.39.201"} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.237745   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Reserved static IP address: 192.168.39.201
	I1004 04:24:13.237765   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | skip adding static IP to network mk-default-k8s-diff-port-281471 - found existing host DHCP lease matching {name: "default-k8s-diff-port-281471", mac: "52:54:00:cd:36:92", ip: "192.168.39.201"}
	I1004 04:24:13.237786   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Getting to WaitForSSH function...
	I1004 04:24:13.237805   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for SSH to be available...
	I1004 04:24:13.240068   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.240354   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.240384   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.240514   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Using SSH client type: external
	I1004 04:24:13.240540   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa (-rw-------)
	I1004 04:24:13.240577   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:24:13.240594   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | About to run SSH command:
	I1004 04:24:13.240608   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | exit 0
	I1004 04:24:08.874627   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:09.374627   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:09.874278   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.374675   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.873949   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:11.373966   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:11.873775   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:12.373874   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:12.874010   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:13.374575   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.550171   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:13.049596   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:14.741098   66293 start.go:364] duration metric: took 53.770546651s to acquireMachinesLock for "no-preload-658545"
	I1004 04:24:14.741156   66293 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:24:14.741164   66293 fix.go:54] fixHost starting: 
	I1004 04:24:14.741565   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:14.741595   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:14.758364   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37947
	I1004 04:24:14.758823   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:14.759356   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:24:14.759383   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:14.759700   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:14.759895   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:14.760077   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:24:14.761849   66293 fix.go:112] recreateIfNeeded on no-preload-658545: state=Stopped err=<nil>
	I1004 04:24:14.761873   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	W1004 04:24:14.762037   66293 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:24:14.764123   66293 out.go:177] * Restarting existing kvm2 VM for "no-preload-658545" ...
	I1004 04:24:13.371830   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | SSH cmd err, output: <nil>: 
	I1004 04:24:13.372219   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetConfigRaw
	I1004 04:24:13.372817   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:13.375676   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.376080   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.376116   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.376393   67541 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/config.json ...
	I1004 04:24:13.376616   67541 machine.go:93] provisionDockerMachine start ...
	I1004 04:24:13.376638   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:13.376845   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.379413   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.379847   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.379908   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.380015   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.380204   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.380360   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.380493   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.380657   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.380913   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.380988   67541 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:24:13.492488   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:24:13.492528   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.492749   67541 buildroot.go:166] provisioning hostname "default-k8s-diff-port-281471"
	I1004 04:24:13.492768   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.492928   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.495691   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.496003   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.496031   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.496160   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.496368   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.496530   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.496651   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.496785   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.497017   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.497034   67541 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-281471 && echo "default-k8s-diff-port-281471" | sudo tee /etc/hostname
	I1004 04:24:13.627336   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-281471
	
	I1004 04:24:13.627364   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.630757   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.631162   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.631199   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.631486   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.631701   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.631874   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.632018   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.632216   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.632431   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.632457   67541 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-281471' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-281471/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-281471' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:24:13.758386   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:24:13.758413   67541 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:24:13.758462   67541 buildroot.go:174] setting up certificates
	I1004 04:24:13.758472   67541 provision.go:84] configureAuth start
	I1004 04:24:13.758484   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.758740   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:13.761590   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.761899   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.761939   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.762068   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.764293   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.764644   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.764672   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.764811   67541 provision.go:143] copyHostCerts
	I1004 04:24:13.764869   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:24:13.764880   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:24:13.764936   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:24:13.765046   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:24:13.765055   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:24:13.765075   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:24:13.765127   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:24:13.765135   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:24:13.765160   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:24:13.765235   67541 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-281471 san=[127.0.0.1 192.168.39.201 default-k8s-diff-port-281471 localhost minikube]
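Illustrative only and not part of the captured log: the provisioner above generates a server certificate whose SANs cover the loopback address, the machine IP, the machine name, localhost and minikube. A compressed, self-signed Go sketch with the same SAN list (key size, validity period and key-usage bits are assumptions, not minikube's actual parameters):

// Generate a self-signed server cert with the SANs listed in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-281471"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.201")},
		DNSNames:     []string{"default-k8s-diff-port-281471", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}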
	I1004 04:24:14.075640   67541 provision.go:177] copyRemoteCerts
	I1004 04:24:14.075698   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:24:14.075722   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.078293   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.078658   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.078689   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.078827   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.079048   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.079213   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.079348   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.167232   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:24:14.193065   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1004 04:24:14.218112   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:24:14.243281   67541 provision.go:87] duration metric: took 484.783764ms to configureAuth
	I1004 04:24:14.243310   67541 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:24:14.243506   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:14.243593   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.246497   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.246837   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.246885   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.247019   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.247211   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.247384   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.247551   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.247719   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:14.247909   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:14.247923   67541 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:24:14.487651   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:24:14.487675   67541 machine.go:96] duration metric: took 1.11104473s to provisionDockerMachine
	I1004 04:24:14.487686   67541 start.go:293] postStartSetup for "default-k8s-diff-port-281471" (driver="kvm2")
	I1004 04:24:14.487696   67541 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:24:14.487733   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.488084   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:24:14.488114   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.490844   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.491198   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.491229   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.491372   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.491562   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.491700   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.491815   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.579398   67541 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:24:14.584068   67541 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:24:14.584098   67541 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:24:14.584179   67541 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:24:14.584274   67541 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:24:14.584379   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:24:14.594853   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:14.621833   67541 start.go:296] duration metric: took 134.135256ms for postStartSetup
	I1004 04:24:14.621874   67541 fix.go:56] duration metric: took 19.532563115s for fixHost
	I1004 04:24:14.621895   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.625077   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.625418   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.625443   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.625678   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.625900   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.626059   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.626205   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.626373   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:14.626589   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:14.626603   67541 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:24:14.740932   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015854.697826512
	
	I1004 04:24:14.740950   67541 fix.go:216] guest clock: 1728015854.697826512
	I1004 04:24:14.740957   67541 fix.go:229] Guest: 2024-10-04 04:24:14.697826512 +0000 UTC Remote: 2024-10-04 04:24:14.621877739 +0000 UTC m=+171.379203860 (delta=75.948773ms)
	I1004 04:24:14.741000   67541 fix.go:200] guest clock delta is within tolerance: 75.948773ms
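Illustrative only and not part of the captured log: the lines above compare the guest's `date +%s.%N` output with the host clock and accept the machine because the ~76ms delta is within tolerance. A minimal Go sketch of that comparison, using the two timestamps from the log (the 1s tolerance is an assumption; the log only shows that ~76ms was accepted):

// Compare a guest unix timestamp against the host clock and check the skew.
package main

import (
	"fmt"
	"time"
)

func withinTolerance(guestUnix float64, host time.Time, tol time.Duration) (time.Duration, bool) {
	guest := time.Unix(0, int64(guestUnix*float64(time.Second)))
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	// Guest and host timestamps taken from the log lines above.
	delta, ok := withinTolerance(1728015854.697826512, time.Unix(0, 1728015854621877739), time.Second)
	fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
}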
	I1004 04:24:14.741007   67541 start.go:83] releasing machines lock for "default-k8s-diff-port-281471", held for 19.651737082s
	I1004 04:24:14.741031   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.741291   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:14.744142   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.744498   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.744518   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.744720   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745368   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745559   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745665   67541 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:24:14.745706   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.745802   67541 ssh_runner.go:195] Run: cat /version.json
	I1004 04:24:14.745843   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.748443   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748779   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.748813   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748838   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748927   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.749064   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.749245   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.749267   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.749283   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.749441   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.749481   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.749589   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.749725   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.749856   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.833632   67541 ssh_runner.go:195] Run: systemctl --version
	I1004 04:24:14.863812   67541 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:24:15.016823   67541 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:24:15.023613   67541 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:24:15.023696   67541 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:24:15.042546   67541 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:24:15.042576   67541 start.go:495] detecting cgroup driver to use...
	I1004 04:24:15.042645   67541 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:24:15.060267   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:24:15.076088   67541 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:24:15.076155   67541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:24:15.091741   67541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:24:15.107153   67541 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:24:15.230591   67541 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:24:15.381704   67541 docker.go:233] disabling docker service ...
	I1004 04:24:15.381776   67541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:24:15.397616   67541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:24:15.412350   67541 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:24:15.569525   67541 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:24:15.690120   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:24:15.705348   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:24:15.728253   67541 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:24:15.728334   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.739875   67541 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:24:15.739951   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.751997   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.765898   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.777917   67541 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:24:15.791235   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.802390   67541 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.825385   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.837278   67541 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:24:15.848791   67541 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:24:15.848864   67541 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:24:15.870774   67541 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:24:15.883544   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:15.997406   67541 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:24:16.095391   67541 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:24:16.095508   67541 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:24:16.102427   67541 start.go:563] Will wait 60s for crictl version
	I1004 04:24:16.102510   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:24:16.106958   67541 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:24:16.150721   67541 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:24:16.150824   67541 ssh_runner.go:195] Run: crio --version
	I1004 04:24:16.181714   67541 ssh_runner.go:195] Run: crio --version
	I1004 04:24:16.214202   67541 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:24:16.215583   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:16.218418   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:16.218800   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:16.218831   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:16.219002   67541 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 04:24:16.223382   67541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:16.236443   67541 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:24:16.236565   67541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:24:16.236652   67541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:16.279095   67541 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:24:16.279158   67541 ssh_runner.go:195] Run: which lz4
	I1004 04:24:16.283684   67541 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:24:16.288436   67541 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:24:16.288472   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 04:24:17.853549   67541 crio.go:462] duration metric: took 1.569889689s to copy over tarball
	I1004 04:24:17.853631   67541 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
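The lines above check whether the preloaded control-plane images are already known to CRI-O and, when they are not, copy the preload tarball onto the VM and unpack it under /var. The following Go sketch approximates those steps as local shell commands; it is illustrative only (minikube actually runs these over SSH via ssh_runner), and it sidesteps parsing the crictl JSON by just searching the output for the image reference:

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

// A simplified, local approximation of the preload step seen in the log:
// if the expected apiserver image is not already present in CRI-O's image
// store, unpack the preloaded image tarball into /var. Paths, flags, and
// the image name mirror the log; error handling is reduced for brevity.
func main() {
	const wantImage = "registry.k8s.io/kube-apiserver:v1.31.1"
	const tarball = "/preloaded.tar.lz4"

	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatalf("crictl images: %v", err)
	}
	if bytes.Contains(out, []byte(wantImage)) {
		fmt.Println("all images are preloaded for cri-o runtime")
		return
	}

	fmt.Println("images not preloaded; extracting", tarball)
	// Same flags as the log: preserve security xattrs, decompress with lz4, unpack under /var.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if err := cmd.Run(); err != nil {
		log.Fatalf("extract preload tarball: %v", err)
	}
}
```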
	I1004 04:24:14.765651   66293 main.go:141] libmachine: (no-preload-658545) Calling .Start
	I1004 04:24:14.765886   66293 main.go:141] libmachine: (no-preload-658545) Ensuring networks are active...
	I1004 04:24:14.766761   66293 main.go:141] libmachine: (no-preload-658545) Ensuring network default is active
	I1004 04:24:14.767179   66293 main.go:141] libmachine: (no-preload-658545) Ensuring network mk-no-preload-658545 is active
	I1004 04:24:14.767706   66293 main.go:141] libmachine: (no-preload-658545) Getting domain xml...
	I1004 04:24:14.768478   66293 main.go:141] libmachine: (no-preload-658545) Creating domain...
	I1004 04:24:16.087556   66293 main.go:141] libmachine: (no-preload-658545) Waiting to get IP...
	I1004 04:24:16.088628   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.089032   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.089093   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.089008   68422 retry.go:31] will retry after 276.442313ms: waiting for machine to come up
	I1004 04:24:16.367448   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.367923   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.367953   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.367894   68422 retry.go:31] will retry after 291.504157ms: waiting for machine to come up
	I1004 04:24:16.661396   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.661958   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.662009   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.661932   68422 retry.go:31] will retry after 378.34293ms: waiting for machine to come up
	I1004 04:24:17.041431   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:17.041942   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:17.041970   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:17.041916   68422 retry.go:31] will retry after 553.613866ms: waiting for machine to come up
	I1004 04:24:17.596745   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:17.597294   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:17.597327   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:17.597259   68422 retry.go:31] will retry after 611.098402ms: waiting for machine to come up
	I1004 04:24:18.210083   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:18.210569   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:18.210592   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:18.210530   68422 retry.go:31] will retry after 691.8822ms: waiting for machine to come up
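The retry.go messages above show the pattern used while waiting for the restarted VM to obtain an IP address: poll, and if the address is not there yet, sleep for a jittered, growing interval before trying again. A small Go sketch of that loop follows; the backoff constants, growth factor, and function names are assumptions chosen to match the shape of the intervals in the log, not minikube's retry package itself:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping with a
// jittered, roughly exponential backoff between attempts, similar in shape
// to the "will retry after ..." messages above.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2 // grow the base interval each round
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 5 {
			return "", errors.New("unable to find current IP address")
		}
		return "203.0.113.10", nil // placeholder address used only for this demo
	}, 2*time.Minute)
	fmt.Println(ip, err)
}
```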
	I1004 04:24:13.873857   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:14.374241   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:14.873863   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.374063   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.873950   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:16.373819   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:16.874290   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:17.374357   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:17.874163   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:18.374160   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.049926   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:17.051060   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:20.132987   67541 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.279324141s)
	I1004 04:24:20.133023   67541 crio.go:469] duration metric: took 2.279442603s to extract the tarball
	I1004 04:24:20.133033   67541 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:24:20.171805   67541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:20.217431   67541 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:24:20.217458   67541 cache_images.go:84] Images are preloaded, skipping loading
	I1004 04:24:20.217468   67541 kubeadm.go:934] updating node { 192.168.39.201 8444 v1.31.1 crio true true} ...
	I1004 04:24:20.217586   67541 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-281471 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:20.217687   67541 ssh_runner.go:195] Run: crio config
	I1004 04:24:20.269529   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:24:20.269559   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:20.269569   67541 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:20.269604   67541 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.201 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-281471 NodeName:default-k8s-diff-port-281471 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:24:20.269822   67541 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.201
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-281471"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:20.269913   67541 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:24:20.281286   67541 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:20.281368   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:20.292186   67541 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1004 04:24:20.310972   67541 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:20.329420   67541 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1004 04:24:20.348358   67541 ssh_runner.go:195] Run: grep 192.168.39.201	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:20.352641   67541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:20.366317   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:20.499648   67541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:20.518930   67541 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471 for IP: 192.168.39.201
	I1004 04:24:20.518954   67541 certs.go:194] generating shared ca certs ...
	I1004 04:24:20.518971   67541 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:20.519121   67541 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:20.519167   67541 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:20.519177   67541 certs.go:256] generating profile certs ...
	I1004 04:24:20.519279   67541 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/client.key
	I1004 04:24:20.519347   67541 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.key.6cd63ef9
	I1004 04:24:20.519381   67541 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.key
	I1004 04:24:20.519492   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:20.519527   67541 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:20.519539   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:20.519570   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:20.519614   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:20.519643   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:20.519710   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:20.520418   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:20.566110   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:20.613646   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:20.648416   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:20.678840   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1004 04:24:20.722021   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 04:24:20.749381   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:20.776777   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 04:24:20.803998   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:20.833182   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:20.859600   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:20.887732   67541 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:20.910566   67541 ssh_runner.go:195] Run: openssl version
	I1004 04:24:20.917151   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:20.930475   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.935819   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.935895   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.942607   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:20.954950   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:20.967348   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.972468   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.972543   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.979061   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:20.992010   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:21.008370   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.015101   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.015161   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.023491   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:21.035766   67541 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:21.041416   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:21.048405   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:21.055468   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:21.062228   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:21.068967   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:21.075984   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1004 04:24:21.086088   67541 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:21.086196   67541 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:21.086253   67541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:21.131997   67541 cri.go:89] found id: ""
	I1004 04:24:21.132061   67541 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:21.145219   67541 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:21.145237   67541 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:21.145289   67541 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:21.157041   67541 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:21.158724   67541 kubeconfig.go:125] found "default-k8s-diff-port-281471" server: "https://192.168.39.201:8444"
	I1004 04:24:21.162295   67541 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:21.173771   67541 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.201
	I1004 04:24:21.173806   67541 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:21.173820   67541 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:21.173891   67541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:21.215149   67541 cri.go:89] found id: ""
	I1004 04:24:21.215216   67541 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:21.234432   67541 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:21.245688   67541 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:21.245707   67541 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:21.245758   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1004 04:24:21.256101   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:21.256168   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:21.267319   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1004 04:24:21.279995   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:21.280050   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:21.292588   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1004 04:24:21.304478   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:21.304545   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:21.317012   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1004 04:24:21.328769   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:21.328853   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:24:21.341597   67541 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:21.353901   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:21.483705   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.340208   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.582628   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.662202   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.773206   67541 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:22.773327   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.274151   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:18.903981   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:18.904373   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:18.904398   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:18.904331   68422 retry.go:31] will retry after 1.022635653s: waiting for machine to come up
	I1004 04:24:19.929163   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:19.929707   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:19.929749   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:19.929656   68422 retry.go:31] will retry after 939.130061ms: waiting for machine to come up
	I1004 04:24:20.870067   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:20.870578   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:20.870606   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:20.870521   68422 retry.go:31] will retry after 1.673919202s: waiting for machine to come up
	I1004 04:24:22.546229   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:22.546621   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:22.546650   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:22.546569   68422 retry.go:31] will retry after 1.962556159s: waiting for machine to come up
	I1004 04:24:18.874214   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.374670   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.874355   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:20.374335   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:20.874299   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:21.374492   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:21.874293   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:22.373890   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:22.874622   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.374639   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.552128   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:22.050844   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:24.051071   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:23.774477   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.807536   67541 api_server.go:72] duration metric: took 1.034328656s to wait for apiserver process to appear ...
	I1004 04:24:23.807569   67541 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:24:23.807593   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.646266   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:26.646299   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:26.646319   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.696828   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:26.696856   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:26.808107   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.819887   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:26.819947   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:27.308535   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:27.317320   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:27.317372   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:27.807868   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:27.817762   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:27.817805   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:28.307660   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:28.313515   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 200:
	ok
	I1004 04:24:28.320539   67541 api_server.go:141] control plane version: v1.31.1
	I1004 04:24:28.320568   67541 api_server.go:131] duration metric: took 4.512991081s to wait for apiserver health ...
	I1004 04:24:28.320578   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:24:28.320586   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:28.322138   67541 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
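
For readers following the apiserver bring-up above: the retry loop in api_server.go repeatedly GETs /healthz and treats anything other than 200 as "not ready yet" (the 500 bodies list which poststarthooks are still failing, here rbac/bootstrap-roles). Below is only a minimal, hypothetical Go sketch of such a poller, not minikube's implementation; the URL and rough timeout are taken from the log, while the TLS-skipping client and function name are assumptions for illustration.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// It mirrors the retry loop visible in the log, where a 500 body lists the
// failing poststarthooks until the check finally succeeds.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver presents a cluster-issued cert during bring-up; a real
		// checker would trust the cluster CA, but this sketch just skips
		// verification to stay self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.201:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
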
	I1004 04:24:24.511356   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:24.511886   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:24.511917   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:24.511843   68422 retry.go:31] will retry after 2.5950382s: waiting for machine to come up
	I1004 04:24:27.109018   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:27.109474   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:27.109503   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:27.109451   68422 retry.go:31] will retry after 2.984182925s: waiting for machine to come up
	I1004 04:24:23.873822   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:24.373911   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:24.874756   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:25.374035   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:25.873874   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.374503   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.874371   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:27.374335   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:27.873941   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:28.373861   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.550974   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:28.552007   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:28.323513   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:24:28.336556   67541 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:24:28.358371   67541 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:24:28.373163   67541 system_pods.go:59] 8 kube-system pods found
	I1004 04:24:28.373204   67541 system_pods.go:61] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:24:28.373217   67541 system_pods.go:61] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:24:28.373228   67541 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:24:28.373239   67541 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:24:28.373246   67541 system_pods.go:61] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:24:28.373256   67541 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:24:28.373267   67541 system_pods.go:61] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:24:28.373273   67541 system_pods.go:61] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:24:28.373283   67541 system_pods.go:74] duration metric: took 14.891267ms to wait for pod list to return data ...
	I1004 04:24:28.373294   67541 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:24:28.378226   67541 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:24:28.378269   67541 node_conditions.go:123] node cpu capacity is 2
	I1004 04:24:28.378285   67541 node_conditions.go:105] duration metric: took 4.985167ms to run NodePressure ...
	I1004 04:24:28.378309   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:28.649369   67541 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:24:28.654563   67541 kubeadm.go:739] kubelet initialised
	I1004 04:24:28.654584   67541 kubeadm.go:740] duration metric: took 5.188927ms waiting for restarted kubelet to initialise ...
	I1004 04:24:28.654591   67541 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:24:28.662152   67541 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.668248   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.668278   67541 pod_ready.go:82] duration metric: took 6.099746ms for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.668287   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.668294   67541 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.675790   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.675811   67541 pod_ready.go:82] duration metric: took 7.509617ms for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.675823   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.675830   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.683763   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.683811   67541 pod_ready.go:82] duration metric: took 7.972006ms for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.683830   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.683839   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.761974   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.762006   67541 pod_ready.go:82] duration metric: took 78.154275ms for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.762021   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.762030   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.162590   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-proxy-4nnld" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.162623   67541 pod_ready.go:82] duration metric: took 400.583388ms for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.162634   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-proxy-4nnld" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.162643   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.562557   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.562584   67541 pod_ready.go:82] duration metric: took 399.929497ms for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.562595   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.562602   67541 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.963502   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.963528   67541 pod_ready.go:82] duration metric: took 400.919452ms for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.963539   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.963547   67541 pod_ready.go:39] duration metric: took 1.308947485s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
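
The pod_ready.go entries above skip every system pod because the node itself still reports Ready as False. A rough client-go sketch of that kind of check is shown below; it is not the minikube code. The kubeconfig path, node name, and pod name are copied from the log, everything else is illustrative.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// nodeReady reports whether the node's Ready condition is True; while it is
// False, the log above marks every hosted pod as "skipping".
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19546-9647/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	node, err := cs.CoreV1().Nodes().Get(ctx, "default-k8s-diff-port-281471", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-4nnld", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node Ready=%v, pod Ready=%v\n", nodeReady(node), podReady(pod))
}
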
	I1004 04:24:29.963561   67541 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:24:29.976241   67541 ops.go:34] apiserver oom_adj: -16
	I1004 04:24:29.976268   67541 kubeadm.go:597] duration metric: took 8.831025549s to restartPrimaryControlPlane
	I1004 04:24:29.976278   67541 kubeadm.go:394] duration metric: took 8.890203906s to StartCluster
	I1004 04:24:29.976295   67541 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:29.976372   67541 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:24:29.977898   67541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:29.978168   67541 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:24:29.978222   67541 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:24:29.978306   67541 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978330   67541 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-281471"
	W1004 04:24:29.978341   67541 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:24:29.978329   67541 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978353   67541 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978369   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:29.978367   67541 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-281471"
	I1004 04:24:29.978377   67541 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-281471"
	W1004 04:24:29.978387   67541 addons.go:243] addon metrics-server should already be in state true
	I1004 04:24:29.978413   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:29.978464   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:29.978731   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978783   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.978818   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978871   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.978839   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978970   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.979903   67541 out.go:177] * Verifying Kubernetes components...
	I1004 04:24:29.981432   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:29.994332   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42631
	I1004 04:24:29.994917   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:29.995488   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:29.995503   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:29.995865   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:29.996675   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:29.999180   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
	I1004 04:24:29.999220   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I1004 04:24:29.999564   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:29.999651   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.000157   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.000182   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.000262   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.000281   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.000379   67541 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-281471"
	W1004 04:24:30.000398   67541 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:24:30.000429   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:30.000613   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.000646   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.000790   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.000812   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.001163   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.001215   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.001259   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.001307   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.016576   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34675
	I1004 04:24:30.016650   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41997
	I1004 04:24:30.016796   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44507
	I1004 04:24:30.016993   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017079   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017138   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017536   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017557   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017548   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017584   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017537   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017621   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017929   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.017931   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.017970   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.018100   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.018152   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.018559   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.018600   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.020021   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.020637   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.022016   67541 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:30.022018   67541 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:24:30.023395   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:24:30.023417   67541 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:24:30.023444   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.023489   67541 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:24:30.023506   67541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:24:30.023528   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.027678   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028005   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028129   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.028180   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028538   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.028552   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.028560   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028724   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.028750   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.028881   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.028911   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.029013   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.029055   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.029124   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.037309   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46395
	I1004 04:24:30.037846   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.038328   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.038355   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.038683   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.038850   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.040366   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.040572   67541 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:24:30.040586   67541 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:24:30.040602   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.043618   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.044070   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.044092   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.044232   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.044413   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.044541   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.044687   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.194435   67541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:30.223577   67541 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-281471" to be "Ready" ...
	I1004 04:24:30.277458   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:24:30.316201   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:24:30.316227   67541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:24:30.333635   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:24:30.346511   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:24:30.346549   67541 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:24:30.405197   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:24:30.405219   67541 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:24:30.465174   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:24:31.307064   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307137   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307430   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.307442   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.307469   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.307546   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307574   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307691   67541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030198983s)
	I1004 04:24:31.307733   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307747   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307789   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.307811   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.309264   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.309275   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.309281   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.309291   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.309299   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.309538   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.309568   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.309583   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.315635   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.315653   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.315917   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.315933   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.411630   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.411658   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.411934   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.411951   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.411965   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.411983   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.411997   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.412221   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.412261   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.412274   67541 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-281471"
	I1004 04:24:31.412283   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.414267   67541 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1004 04:24:31.415607   67541 addons.go:510] duration metric: took 1.43738386s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
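
After applying the metrics-server manifests, addons.go verifies the addon in the profile. Purely as an illustrative sketch (again, not minikube's code), one way to confirm the addon is serving is to inspect its Deployment status with client-go; the Deployment name "metrics-server" in kube-system is an assumption inferred from the pod name in the log, and the default kubeconfig location is used here.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The pod metrics-server-6867b74b74-f6qhr seen in the log implies a
	// Deployment named "metrics-server" in kube-system; that name is an
	// assumption for this sketch, not taken from the minikube source.
	dep, err := cs.AppsV1().Deployments("kube-system").Get(context.Background(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("metrics-server: %d/%d replicas available\n", dep.Status.AvailableReplicas, dep.Status.Replicas)
}
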
	I1004 04:24:32.227563   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:30.095611   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:30.096032   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:30.096061   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:30.095981   68422 retry.go:31] will retry after 2.833386023s: waiting for machine to come up
	I1004 04:24:32.933027   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.933509   66293 main.go:141] libmachine: (no-preload-658545) Found IP for machine: 192.168.72.54
	I1004 04:24:32.933538   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has current primary IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.933544   66293 main.go:141] libmachine: (no-preload-658545) Reserving static IP address...
	I1004 04:24:32.933950   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "no-preload-658545", mac: "52:54:00:f5:6c:11", ip: "192.168.72.54"} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:32.933970   66293 main.go:141] libmachine: (no-preload-658545) Reserved static IP address: 192.168.72.54
	I1004 04:24:32.933988   66293 main.go:141] libmachine: (no-preload-658545) DBG | skip adding static IP to network mk-no-preload-658545 - found existing host DHCP lease matching {name: "no-preload-658545", mac: "52:54:00:f5:6c:11", ip: "192.168.72.54"}
	I1004 04:24:32.934002   66293 main.go:141] libmachine: (no-preload-658545) DBG | Getting to WaitForSSH function...
	I1004 04:24:32.934016   66293 main.go:141] libmachine: (no-preload-658545) Waiting for SSH to be available...
	I1004 04:24:32.936089   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.936440   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:32.936471   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.936572   66293 main.go:141] libmachine: (no-preload-658545) DBG | Using SSH client type: external
	I1004 04:24:32.936599   66293 main.go:141] libmachine: (no-preload-658545) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa (-rw-------)
	I1004 04:24:32.936637   66293 main.go:141] libmachine: (no-preload-658545) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:24:32.936650   66293 main.go:141] libmachine: (no-preload-658545) DBG | About to run SSH command:
	I1004 04:24:32.936661   66293 main.go:141] libmachine: (no-preload-658545) DBG | exit 0
	I1004 04:24:33.064432   66293 main.go:141] libmachine: (no-preload-658545) DBG | SSH cmd err, output: <nil>: 
	I1004 04:24:33.064791   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetConfigRaw
	I1004 04:24:33.065494   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:33.068038   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.068302   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.068325   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.068580   66293 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/config.json ...
	I1004 04:24:33.068837   66293 machine.go:93] provisionDockerMachine start ...
	I1004 04:24:33.068858   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:33.069072   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.071425   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.071748   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.071819   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.071946   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.072166   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.072332   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.072429   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.072587   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.072799   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.072814   66293 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:24:33.184623   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:24:33.184656   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.184912   66293 buildroot.go:166] provisioning hostname "no-preload-658545"
	I1004 04:24:33.184946   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.185126   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.188804   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.189189   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.189222   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.189419   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.189664   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.189839   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.190002   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.190128   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.190300   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.190313   66293 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-658545 && echo "no-preload-658545" | sudo tee /etc/hostname
	I1004 04:24:33.316349   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-658545
	
	I1004 04:24:33.316381   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.319460   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.319908   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.319945   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.320110   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.320301   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.320475   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.320628   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.320811   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.321031   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.321058   66293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-658545' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-658545/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-658545' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:24:28.874265   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:29.374364   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:29.874581   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:30.373909   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:30.874089   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.374708   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.874696   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:32.374061   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:32.874233   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:33.374290   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.050105   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:33.549870   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:33.444185   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:24:33.444221   66293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:24:33.444246   66293 buildroot.go:174] setting up certificates
	I1004 04:24:33.444257   66293 provision.go:84] configureAuth start
	I1004 04:24:33.444273   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.444569   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:33.447726   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.448137   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.448168   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.448332   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.450903   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.451311   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.451340   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.451479   66293 provision.go:143] copyHostCerts
	I1004 04:24:33.451559   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:24:33.451571   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:24:33.451638   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:24:33.451748   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:24:33.451763   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:24:33.451818   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:24:33.451897   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:24:33.451906   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:24:33.451931   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:24:33.451992   66293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.no-preload-658545 san=[127.0.0.1 192.168.72.54 localhost minikube no-preload-658545]
	I1004 04:24:33.577106   66293 provision.go:177] copyRemoteCerts
	I1004 04:24:33.577160   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:24:33.577183   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.579990   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.580330   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.580359   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.580496   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.580672   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.580810   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.580937   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:33.671123   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:24:33.697805   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1004 04:24:33.725408   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:24:33.751285   66293 provision.go:87] duration metric: took 307.010531ms to configureAuth
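
configureAuth above regenerates the machine's server certificate with the SANs listed in the provision.go line (127.0.0.1, 192.168.72.54, localhost, minikube, no-preload-658545). Purely as an illustration of that step, here is a self-signed variant using Go's crypto/x509; minikube actually signs with the CA key under .minikube/certs, which this sketch does not reproduce.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a server certificate carrying the SANs from the log.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-658545"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.54")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-658545"},
	}
	// Self-signed for brevity: template doubles as the issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
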
	I1004 04:24:33.751315   66293 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:24:33.751553   66293 config.go:182] Loaded profile config "no-preload-658545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:33.751651   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.754476   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.754896   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.754938   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.755087   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.755282   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.755450   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.755592   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.755723   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.755969   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.755987   66293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:24:33.996596   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:24:33.996625   66293 machine.go:96] duration metric: took 927.772762ms to provisionDockerMachine
	I1004 04:24:33.996636   66293 start.go:293] postStartSetup for "no-preload-658545" (driver="kvm2")
	I1004 04:24:33.996645   66293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:24:33.996662   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:33.996958   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:24:33.996981   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.999632   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.000082   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.000111   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.000324   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.000537   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.000733   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.000924   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.089338   66293 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:24:34.094278   66293 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:24:34.094303   66293 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:24:34.094377   66293 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:24:34.094468   66293 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:24:34.094597   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:24:34.105335   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:34.134191   66293 start.go:296] duration metric: took 137.541908ms for postStartSetup
	I1004 04:24:34.134243   66293 fix.go:56] duration metric: took 19.393079344s for fixHost
	I1004 04:24:34.134269   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.137227   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.137599   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.137638   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.137779   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.137978   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.138156   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.138289   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.138459   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:34.138652   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:34.138663   66293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:24:34.250671   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015874.218795126
	
	I1004 04:24:34.250699   66293 fix.go:216] guest clock: 1728015874.218795126
	I1004 04:24:34.250709   66293 fix.go:229] Guest: 2024-10-04 04:24:34.218795126 +0000 UTC Remote: 2024-10-04 04:24:34.134249208 +0000 UTC m=+355.755571497 (delta=84.545918ms)
	I1004 04:24:34.250735   66293 fix.go:200] guest clock delta is within tolerance: 84.545918ms
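fix.go compares the guest clock (read via `date +%s.%N`) against the host clock and accepts the result when the delta stays under a tolerance; here the delta is about 84.5ms. A small sketch of that comparison; the 2-second threshold is an assumed value for illustration, not necessarily the one minikube uses.

    // sketch: decide whether a guest/host clock delta is within tolerance
    package main

    import (
        "fmt"
        "time"
    )

    func clockSkewOK(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        guest := time.Unix(1728015874, 218795126) // guest's `date +%s.%N` from the log
        host := time.Date(2024, 10, 4, 4, 24, 34, 134249208, time.UTC)
        fmt.Println(clockSkewOK(guest, host, 2*time.Second)) // true: delta is ~84.5ms
    }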
	I1004 04:24:34.250742   66293 start.go:83] releasing machines lock for "no-preload-658545", held for 19.509615446s
	I1004 04:24:34.250763   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.250965   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:34.254332   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.254720   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.254746   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.254982   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255550   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255745   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255843   66293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:24:34.255907   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.255973   66293 ssh_runner.go:195] Run: cat /version.json
	I1004 04:24:34.255996   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.258802   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259036   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259118   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.259143   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259309   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.259487   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.259538   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.259563   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259633   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.259752   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.259845   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.259891   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.260042   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.260180   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.362345   66293 ssh_runner.go:195] Run: systemctl --version
	I1004 04:24:34.368641   66293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:24:34.527679   66293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:24:34.534212   66293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:24:34.534291   66293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:24:34.553539   66293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:24:34.553570   66293 start.go:495] detecting cgroup driver to use...
	I1004 04:24:34.553638   66293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:24:34.573489   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:24:34.588220   66293 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:24:34.588281   66293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:24:34.606014   66293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:24:34.621246   66293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:24:34.749423   66293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:24:34.915880   66293 docker.go:233] disabling docker service ...
	I1004 04:24:34.915960   66293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:24:34.936625   66293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:24:34.951534   66293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:24:35.089398   66293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:24:35.225269   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:24:35.241006   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:24:35.261586   66293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:24:35.261651   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.273501   66293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:24:35.273571   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.285392   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.296475   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.307774   66293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:24:35.319241   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.330361   66293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.349013   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
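The sed invocations above rewrite individual keys in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). As a rough illustration of the same idea, here is a hypothetical Go helper that replaces one such key in place; minikube itself shells out to sed exactly as shown in the log.

    // sketch: replace a single `key = value` line in a CRI-O drop-in config,
    // mirroring `sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'`
    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func setConfKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        // would need root on the guest; path and values come from the log above
        if err := setConfKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs"); err != nil {
            fmt.Println(err)
        }
    }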
	I1004 04:24:35.360603   66293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:24:35.371516   66293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:24:35.371581   66293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:24:35.387209   66293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:24:35.398144   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:35.528196   66293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:24:35.629120   66293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:24:35.629198   66293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:24:35.634243   66293 start.go:563] Will wait 60s for crictl version
	I1004 04:24:35.634307   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:35.638372   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:24:35.678659   66293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:24:35.678763   66293 ssh_runner.go:195] Run: crio --version
	I1004 04:24:35.715285   66293 ssh_runner.go:195] Run: crio --version
	I1004 04:24:35.751571   66293 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:24:34.228500   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:36.727080   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:37.228706   67541 node_ready.go:49] node "default-k8s-diff-port-281471" has status "Ready":"True"
	I1004 04:24:37.228745   67541 node_ready.go:38] duration metric: took 7.005123712s for node "default-k8s-diff-port-281471" to be "Ready" ...
	I1004 04:24:37.228760   67541 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:24:37.235256   67541 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
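node_ready and pod_ready above poll the API server until the node and the system-critical pods report Ready. A minimal client-go sketch of the node-side check; the kubeconfig path is a placeholder and the 2-second poll interval is an assumption, not minikube's exact loop.

    // sketch: poll a node's Ready condition with client-go
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for {
            ok, err := nodeReady(cs, "default-k8s-diff-port-281471")
            fmt.Println("Ready:", ok, "err:", err)
            if ok {
                break
            }
            time.Sleep(2 * time.Second)
        }
    }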
	I1004 04:24:35.752737   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:35.755375   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:35.755763   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:35.755818   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:35.756063   66293 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1004 04:24:35.760601   66293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:35.773870   66293 kubeadm.go:883] updating cluster {Name:no-preload-658545 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:24:35.773970   66293 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:24:35.774001   66293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:35.813619   66293 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:24:35.813650   66293 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 04:24:35.813736   66293 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:35.813756   66293 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.813785   66293 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1004 04:24:35.813796   66293 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.813877   66293 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:35.813740   66293 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.813758   66293 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.813771   66293 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.815271   66293 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:35.815277   66293 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1004 04:24:35.815292   66293 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.815271   66293 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.815276   66293 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:35.815353   66293 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.815358   66293 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.815402   66293 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.956470   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.963066   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.965110   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.970080   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.972477   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.988253   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.013802   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1004 04:24:36.063322   66293 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1004 04:24:36.063364   66293 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.063405   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214786   66293 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1004 04:24:36.214827   66293 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.214867   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214928   66293 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1004 04:24:36.214961   66293 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1004 04:24:36.214995   66293 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.215023   66293 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1004 04:24:36.215043   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214965   66293 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.215081   66293 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1004 04:24:36.215047   66293 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.215100   66293 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.215110   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.215139   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.215147   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.274103   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.274103   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.274185   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.274292   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.274329   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.274343   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.392523   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.405236   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.405257   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.408799   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.408857   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.408860   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.511001   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.568598   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.568658   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.568720   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.568929   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.569021   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.599594   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1004 04:24:36.599733   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.696242   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1004 04:24:36.696294   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1004 04:24:36.696336   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1004 04:24:36.696363   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:36.696390   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:36.696399   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:36.696401   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1004 04:24:36.696449   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1004 04:24:36.696507   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:36.696521   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:36.696508   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1004 04:24:36.696563   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.696613   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.701522   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1004 04:24:37.132809   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:33.874344   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:34.374158   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:34.873848   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:35.373944   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:35.874697   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.373831   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.874231   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:37.374723   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:37.873861   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:38.374206   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.050420   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:38.051653   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:39.242026   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:41.244977   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:39.289977   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.593422519s)
	I1004 04:24:39.290020   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1004 04:24:39.290087   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.593446646s)
	I1004 04:24:39.290114   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1004 04:24:39.290136   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:39.290158   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.593739386s)
	I1004 04:24:39.290175   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1004 04:24:39.290097   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.593563637s)
	I1004 04:24:39.290203   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.593795645s)
	I1004 04:24:39.290208   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:39.290213   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1004 04:24:39.290213   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1004 04:24:39.290265   66293 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.157417466s)
	I1004 04:24:39.290314   66293 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1004 04:24:39.290348   66293 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:39.290392   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:40.750955   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.460708297s)
	I1004 04:24:40.751065   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1004 04:24:40.751102   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:40.750969   66293 ssh_runner.go:235] Completed: which crictl: (1.460561899s)
	I1004 04:24:40.751159   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:40.751190   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:43.031349   66293 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.280136047s)
	I1004 04:24:43.031395   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.280209115s)
	I1004 04:24:43.031566   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1004 04:24:43.031493   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:43.031600   66293 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:43.031641   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:43.084191   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:38.873705   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:39.374361   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:39.874144   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.373793   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.873796   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:41.374744   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:41.874442   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:42.374561   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:42.874638   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:43.374677   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.548818   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:42.550744   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:43.742554   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:44.244427   67541 pod_ready.go:93] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.244453   67541 pod_ready.go:82] duration metric: took 7.009169057s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.244463   67541 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.250595   67541 pod_ready.go:93] pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.250617   67541 pod_ready.go:82] duration metric: took 6.147481ms for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.250625   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.256537   67541 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.256570   67541 pod_ready.go:82] duration metric: took 5.936641ms for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.256583   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.262681   67541 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.262707   67541 pod_ready.go:82] duration metric: took 6.115804ms for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.262721   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.271089   67541 pod_ready.go:93] pod "kube-proxy-4nnld" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.271124   67541 pod_ready.go:82] duration metric: took 8.394207ms for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.271138   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.640124   67541 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.640158   67541 pod_ready.go:82] duration metric: took 369.009816ms for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.640172   67541 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:46.647420   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:45.132971   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.101305613s)
	I1004 04:24:45.133043   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1004 04:24:45.133071   66293 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.048844025s)
	I1004 04:24:45.133079   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:45.133110   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1004 04:24:45.133135   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:45.133179   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:47.228047   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.094844592s)
	I1004 04:24:47.228087   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1004 04:24:47.228089   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.0949275s)
	I1004 04:24:47.228119   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1004 04:24:47.228154   66293 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:47.228214   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:43.874583   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:44.374117   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:44.874398   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.374755   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.874039   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:46.374598   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:46.874446   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:47.374384   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:47.874596   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:48.374021   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.049760   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:47.551861   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:48.647700   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:50.648288   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:52.649288   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:50.627043   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.398805191s)
	I1004 04:24:50.627085   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1004 04:24:50.627122   66293 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:50.627191   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:51.282056   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1004 04:24:51.282099   66293 cache_images.go:123] Successfully loaded all cached images
	I1004 04:24:51.282104   66293 cache_images.go:92] duration metric: took 15.468441268s to LoadCachedImages
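The LoadCachedImages sequence above stats each cached tarball on the guest, skips the transfer when the file already exists, and loads it into the container runtime's store with `sudo podman load -i <tar>`. A compact sketch of that check-then-load loop, intended to run on the guest; the file names mirror the log, while the control flow is deliberately simplified.

    // sketch: load cached image tarballs with podman, skipping any that are missing
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    func loadImage(tar string) error {
        if _, err := os.Stat(tar); err != nil {
            return fmt.Errorf("tarball not present (would need to be copied over first): %w", err)
        }
        out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v: %s", tar, err, out)
        }
        return nil
    }

    func main() {
        dir := "/var/lib/minikube/images"
        for _, name := range []string{
            "kube-controller-manager_v1.31.1", "kube-scheduler_v1.31.1",
            "kube-proxy_v1.31.1", "coredns_v1.11.3",
            "kube-apiserver_v1.31.1", "etcd_3.5.15-0", "storage-provisioner_v5",
        } {
            if err := loadImage(filepath.Join(dir, name)); err != nil {
                fmt.Println(err)
            }
        }
    }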
	I1004 04:24:51.282116   66293 kubeadm.go:934] updating node { 192.168.72.54 8443 v1.31.1 crio true true} ...
	I1004 04:24:51.282243   66293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-658545 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:51.282321   66293 ssh_runner.go:195] Run: crio config
	I1004 04:24:51.333133   66293 cni.go:84] Creating CNI manager for ""
	I1004 04:24:51.333162   66293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:51.333173   66293 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:51.333201   66293 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-658545 NodeName:no-preload-658545 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:24:51.333361   66293 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-658545"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:51.333419   66293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:24:51.344694   66293 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:51.344757   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:51.354990   66293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1004 04:24:51.372572   66293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:51.394129   66293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1004 04:24:51.412865   66293 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:51.416985   66293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:51.430835   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:51.559349   66293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:51.579093   66293 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545 for IP: 192.168.72.54
	I1004 04:24:51.579120   66293 certs.go:194] generating shared ca certs ...
	I1004 04:24:51.579140   66293 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:51.579318   66293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:51.579378   66293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:51.579391   66293 certs.go:256] generating profile certs ...
	I1004 04:24:51.579494   66293 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/client.key
	I1004 04:24:51.579588   66293 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.key.10ceac04
	I1004 04:24:51.579648   66293 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.key
	I1004 04:24:51.579808   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:51.579849   66293 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:51.579861   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:51.579891   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:51.579926   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:51.579961   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:51.580018   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:51.580871   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:51.630190   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:51.667887   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:51.715372   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:51.750063   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1004 04:24:51.776606   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 04:24:51.808943   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:51.839165   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:24:51.867862   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:51.898026   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:51.926810   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:51.955416   66293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:51.977621   66293 ssh_runner.go:195] Run: openssl version
	I1004 04:24:51.984023   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:51.997672   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.002969   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.003039   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.009473   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:52.021001   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:52.032834   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.037679   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.037742   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.044012   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:52.055377   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:52.066222   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.070747   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.070794   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.076922   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:52.087952   66293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:52.093052   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:52.099710   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:52.105841   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:52.112092   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:52.118428   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:52.125380   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
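Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now. An equivalent check with Go's crypto/x509, shown with one of the certificate paths from the log as an example:

    // sketch: equivalent of `openssl x509 -noout -in <cert> -checkend 86400`
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println("expires within 24h:", soon, "err:", err)
    }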
	I1004 04:24:52.132085   66293 kubeadm.go:392] StartCluster: {Name:no-preload-658545 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:52.132193   66293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:52.132254   66293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:52.171814   66293 cri.go:89] found id: ""
	I1004 04:24:52.171882   66293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:52.182484   66293 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:52.182508   66293 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:52.182559   66293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:52.193069   66293 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:52.194108   66293 kubeconfig.go:125] found "no-preload-658545" server: "https://192.168.72.54:8443"
	I1004 04:24:52.196237   66293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:52.206551   66293 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I1004 04:24:52.206584   66293 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:52.206598   66293 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:52.206657   66293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:52.249698   66293 cri.go:89] found id: ""
	I1004 04:24:52.249762   66293 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:52.266001   66293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:52.276056   66293 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:52.276081   66293 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:52.276128   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:24:52.285610   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:52.285677   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:52.295177   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:24:52.304309   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:52.304362   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:52.314126   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:24:52.323562   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:52.323618   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:52.332906   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:24:52.342199   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:52.342252   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
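In the block above, each kubeconfig under /etc/kubernetes is grepped for the expected "https://control-plane.minikube.internal:8443" server entry and removed when the entry is absent; here every grep exits with status 2 simply because the files do not exist yet. A compact Go sketch of that check-and-remove loop; the command strings are taken from the log, the helper name is made up.

package cleanup

import (
	"fmt"
	"os/exec"
)

// removeStaleKubeconfigs drops any kubeconfig that does not point at the
// expected control-plane endpoint, mirroring the grep/rm pairs in the log.
func removeStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing or the file is absent.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s, removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}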
	I1004 04:24:52.351661   66293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:52.361071   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:52.493171   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:48.874471   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:49.374480   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:49.874689   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.373726   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.874543   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:51.373743   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:51.874513   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:52.374719   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:52.874305   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:53.374419   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
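The 67282 lines interleaved here belong to a different profile that is still waiting for its kube-apiserver process to appear: it re-runs "pgrep -xnf kube-apiserver.*minikube.*" roughly every 500ms. A small Go sketch of that style of wait loop with a timeout added; only the pgrep invocation is taken from the log, the rest is illustrative.

package procwait

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process shows up
// or the deadline passes, like the repeated ssh_runner calls in the log.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 once a process matching the pattern exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}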
	I1004 04:24:50.049668   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:52.050522   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:55.147282   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:57.648169   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:53.586422   66293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.093219868s)
	I1004 04:24:53.586448   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:53.794085   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:53.872327   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
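Because existing configuration files were found, the restart path above replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd, and later addon) against /var/tmp/minikube/kubeadm.yaml rather than running a full init. A sketch of that phase sequence exactly as the log shows it; the binary and config paths come from the log, everything else is illustrative.

package phases

import (
	"fmt"
	"os/exec"
)

// replayInitPhases runs the kubeadm init phases seen in the log, in order,
// against the generated kubeadm config for a restarted control plane.
func replayInitPhases() error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"/var/lib/minikube/binaries/v1.31.1/kubeadm", "init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %v failed: %v\n%s", p, err, out)
		}
	}
	return nil
}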
	I1004 04:24:54.004418   66293 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:54.004510   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.505463   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.004602   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.036834   66293 api_server.go:72] duration metric: took 1.032414365s to wait for apiserver process to appear ...
	I1004 04:24:55.036858   66293 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:24:55.036877   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:55.037325   66293 api_server.go:269] stopped: https://192.168.72.54:8443/healthz: Get "https://192.168.72.54:8443/healthz": dial tcp 192.168.72.54:8443: connect: connection refused
	I1004 04:24:55.537513   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:57.951637   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:57.951663   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:57.951676   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.010162   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:58.010188   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:58.037484   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.060069   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:58.060161   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:53.874725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.373903   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.874127   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.374051   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.874019   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:56.373828   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:56.874027   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:57.373914   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:57.874598   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:58.374106   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.550080   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:56.550541   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:59.051837   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:58.536932   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.541611   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:58.541634   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:59.037723   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:59.057378   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:59.057411   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:59.536994   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:59.545827   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 200:
	ok
	I1004 04:24:59.554199   66293 api_server.go:141] control plane version: v1.31.1
	I1004 04:24:59.554238   66293 api_server.go:131] duration metric: took 4.517373336s to wait for apiserver health ...
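The /healthz exchange above is the usual progression for a freshly restarted apiserver: 403 while the anonymous user is not yet authorized, 500 while post-start hooks (RBAC bootstrap roles, priority classes, and so on) are still completing, then 200 "ok". A minimal Go sketch of such a poll loop; it skips TLS verification as a bootstrap probe typically must, and the URL is the one from the log.

package healthz

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
// printing the body of failed checks (the [+]/[-] post-start hook report).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serving cert is not trusted by this probe, so skip
		// verification here; a real client would pin the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s (e.g. https://192.168.72.54:8443/healthz)", timeout)
}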
	I1004 04:24:59.554247   66293 cni.go:84] Creating CNI manager for ""
	I1004 04:24:59.554253   66293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:59.555912   66293 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:24:59.557009   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:24:59.590146   66293 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
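The two steps above create /etc/cni/net.d and write a single bridge conflist into it, which is what the crio runtime then uses for pod networking. The 496-byte file minikube actually writes is not reproduced in the log, so the sketch below writes an illustrative bridge + portmap conflist of the standard CNI shape; every field value here, including the subnet, is an example rather than the real file contents.

package main

import "os"

// An illustrative bridge CNI configuration of the kind written to
// /etc/cni/net.d/1-k8s.conflist; the values are examples only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
		panic(err)
	}
}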
	I1004 04:24:59.610903   66293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:24:59.634067   66293 system_pods.go:59] 8 kube-system pods found
	I1004 04:24:59.634109   66293 system_pods.go:61] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:24:59.634121   66293 system_pods.go:61] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:24:59.634131   66293 system_pods.go:61] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:24:59.634143   66293 system_pods.go:61] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:24:59.634151   66293 system_pods.go:61] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 04:24:59.634160   66293 system_pods.go:61] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:24:59.634168   66293 system_pods.go:61] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:24:59.634181   66293 system_pods.go:61] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 04:24:59.634189   66293 system_pods.go:74] duration metric: took 23.257716ms to wait for pod list to return data ...
	I1004 04:24:59.634198   66293 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:24:59.638128   66293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:24:59.638160   66293 node_conditions.go:123] node cpu capacity is 2
	I1004 04:24:59.638173   66293 node_conditions.go:105] duration metric: took 3.969841ms to run NodePressure ...
	I1004 04:24:59.638191   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:59.968829   66293 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:24:59.975495   66293 kubeadm.go:739] kubelet initialised
	I1004 04:24:59.975516   66293 kubeadm.go:740] duration metric: took 6.660196ms waiting for restarted kubelet to initialise ...
	I1004 04:24:59.975522   66293 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:25:00.084084   66293 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.113474   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.113498   66293 pod_ready.go:82] duration metric: took 29.379607ms for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.113507   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.113513   66293 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.128436   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "etcd-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.128463   66293 pod_ready.go:82] duration metric: took 14.94278ms for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.128475   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "etcd-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.128485   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.140033   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-apiserver-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.140059   66293 pod_ready.go:82] duration metric: took 11.56545ms for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.140068   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-apiserver-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.140077   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.157254   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.157286   66293 pod_ready.go:82] duration metric: took 17.197805ms for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.157298   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.157306   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.415110   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-proxy-dvr6b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.415141   66293 pod_ready.go:82] duration metric: took 257.824162ms for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.415151   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-proxy-dvr6b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.415157   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.815201   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-scheduler-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.815226   66293 pod_ready.go:82] duration metric: took 400.063468ms for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.815235   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-scheduler-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.815241   66293 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:01.214416   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:01.214448   66293 pod_ready.go:82] duration metric: took 399.197779ms for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:01.214461   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:01.214468   66293 pod_ready.go:39] duration metric: took 1.238937842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
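The pod_ready checks above follow a simple rule: a system pod is only waited on for its Ready condition while the node hosting it is itself Ready, otherwise the wait is skipped with the "(skipping!)" error seen here. A rough client-go sketch of the two condition checks involved; clientset construction is omitted and the function names are invented.

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the named node has a Ready condition set to True.
func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

// podReady reports whether a kube-system pod has its Ready condition set to True.
func podReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	pod, err := c.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}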
	I1004 04:25:01.214484   66293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:25:01.227389   66293 ops.go:34] apiserver oom_adj: -16
	I1004 04:25:01.227414   66293 kubeadm.go:597] duration metric: took 9.044898439s to restartPrimaryControlPlane
	I1004 04:25:01.227424   66293 kubeadm.go:394] duration metric: took 9.095346513s to StartCluster
	I1004 04:25:01.227441   66293 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:25:01.227520   66293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:25:01.229057   66293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:25:01.229318   66293 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:25:01.229389   66293 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:25:01.229496   66293 addons.go:69] Setting storage-provisioner=true in profile "no-preload-658545"
	I1004 04:25:01.229505   66293 addons.go:69] Setting default-storageclass=true in profile "no-preload-658545"
	I1004 04:25:01.229512   66293 addons.go:234] Setting addon storage-provisioner=true in "no-preload-658545"
	W1004 04:25:01.229520   66293 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:25:01.229524   66293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-658545"
	I1004 04:25:01.229558   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.229562   66293 config.go:182] Loaded profile config "no-preload-658545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:25:01.229557   66293 addons.go:69] Setting metrics-server=true in profile "no-preload-658545"
	I1004 04:25:01.229607   66293 addons.go:234] Setting addon metrics-server=true in "no-preload-658545"
	W1004 04:25:01.229621   66293 addons.go:243] addon metrics-server should already be in state true
	I1004 04:25:01.229655   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.229968   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.229987   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.229971   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.230013   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.230030   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.230133   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.231051   66293 out.go:177] * Verifying Kubernetes components...
	I1004 04:25:01.232578   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:25:01.256283   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38313
	I1004 04:25:01.256939   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.257689   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.257720   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.258124   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.258358   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.262593   66293 addons.go:234] Setting addon default-storageclass=true in "no-preload-658545"
	W1004 04:25:01.262620   66293 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:25:01.262652   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.263036   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.263117   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.274653   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I1004 04:25:01.275130   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.275655   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.275685   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.276062   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.276652   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.276697   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.277272   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I1004 04:25:01.277756   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.278175   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.278191   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.278548   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.279116   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.279163   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.283719   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1004 04:25:01.284316   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.284814   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.284836   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.285180   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.285751   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.285801   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.297682   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I1004 04:25:01.297859   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I1004 04:25:01.298298   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.298418   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.298975   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.298995   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.299058   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.299077   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.299407   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.299470   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.299618   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.299660   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.301552   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.302048   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.303197   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I1004 04:25:01.303600   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.304053   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.304068   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.304124   66293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:25:01.304234   66293 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:25:01.304403   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.304571   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.305715   66293 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:25:01.305735   66293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:25:01.305850   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:25:01.305861   66293 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:25:01.305876   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.305752   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.306101   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.306321   66293 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:25:01.306334   66293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:25:01.306349   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.310374   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.310752   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.310776   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.310888   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.311057   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.311192   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.311272   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.311338   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.311603   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312049   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.312072   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312175   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.312201   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312302   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.312468   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.312497   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.312586   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.312658   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.312681   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.312811   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.312948   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.478533   66293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:25:01.511716   66293 node_ready.go:35] waiting up to 6m0s for node "no-preload-658545" to be "Ready" ...
	I1004 04:25:01.557879   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:25:01.574381   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:25:01.601090   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:25:01.601112   66293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:25:01.630465   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:25:01.630495   66293 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:25:01.681089   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:25:01.681118   66293 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:25:01.703024   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:25:02.053562   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.053585   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.053855   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.053871   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.053882   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.053891   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.054118   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.054139   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.054128   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.061624   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.061646   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.061949   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.061967   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.061985   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.580950   66293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.00653263s)
	I1004 04:25:02.581002   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.581014   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.581350   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.581368   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.581376   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.581382   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.581459   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.581594   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.581606   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.702713   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.702739   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.703015   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.703028   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.703090   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.703106   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.703117   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.703347   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.703363   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.703380   66293 addons.go:475] Verifying addon metrics-server=true in "no-preload-658545"
	I1004 04:25:02.705335   66293 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1004 04:24:59.648241   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:01.649424   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:02.706605   66293 addons.go:510] duration metric: took 1.477226s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1004 04:24:58.874143   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:59.373810   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:59.874682   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:00.374672   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:00.873725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.374175   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.874724   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:02.374725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:02.874746   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:03.373689   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.548783   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:03.549515   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:04.146633   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:06.147540   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.147626   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:03.516566   66293 node_ready.go:53] node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:06.022815   66293 node_ready.go:53] node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:03.874594   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:04.374498   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:04.874377   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:05.374050   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:05.374139   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:05.412153   67282 cri.go:89] found id: ""
	I1004 04:25:05.412185   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.412195   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:05.412202   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:05.412264   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:05.446725   67282 cri.go:89] found id: ""
	I1004 04:25:05.446750   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.446758   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:05.446763   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:05.446816   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:05.487652   67282 cri.go:89] found id: ""
	I1004 04:25:05.487678   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.487686   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:05.487691   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:05.487752   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:05.526275   67282 cri.go:89] found id: ""
	I1004 04:25:05.526302   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.526310   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:05.526319   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:05.526375   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:05.565004   67282 cri.go:89] found id: ""
	I1004 04:25:05.565034   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.565045   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:05.565052   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:05.565101   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:05.601963   67282 cri.go:89] found id: ""
	I1004 04:25:05.601990   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.601998   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:05.602003   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:05.602051   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:05.638621   67282 cri.go:89] found id: ""
	I1004 04:25:05.638651   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.638660   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:05.638666   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:05.638720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:05.678042   67282 cri.go:89] found id: ""
	I1004 04:25:05.678071   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.678082   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:05.678093   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:05.678107   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:05.720677   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:05.720707   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:05.775219   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:05.775252   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:05.789748   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:05.789774   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:05.918752   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:05.918783   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:05.918798   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:08.493206   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:05.551035   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.048870   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:10.148154   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.645708   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.516666   66293 node_ready.go:49] node "no-preload-658545" has status "Ready":"True"
	I1004 04:25:08.516690   66293 node_ready.go:38] duration metric: took 7.004939371s for node "no-preload-658545" to be "Ready" ...
	I1004 04:25:08.516699   66293 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:25:08.522101   66293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.527132   66293 pod_ready.go:93] pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:08.527153   66293 pod_ready.go:82] duration metric: took 5.024648ms for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.527162   66293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.534172   66293 pod_ready.go:93] pod "etcd-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:08.534195   66293 pod_ready.go:82] duration metric: took 7.027189ms for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.534204   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:10.541186   66293 pod_ready.go:103] pod "kube-apiserver-no-preload-658545" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.040607   66293 pod_ready.go:93] pod "kube-apiserver-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.040640   66293 pod_ready.go:82] duration metric: took 3.506428875s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.040654   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.045845   66293 pod_ready.go:93] pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.045870   66293 pod_ready.go:82] duration metric: took 5.207108ms for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.045883   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.051587   66293 pod_ready.go:93] pod "kube-proxy-dvr6b" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.051604   66293 pod_ready.go:82] duration metric: took 5.715328ms for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.051613   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.116361   66293 pod_ready.go:93] pod "kube-scheduler-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.116401   66293 pod_ready.go:82] duration metric: took 64.774234ms for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.116411   66293 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
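
The pod_ready entries above and below come from minikube repeatedly checking each system pod's Ready condition until it reports True or the 6m0s budget runs out. As a rough, hypothetical sketch of that kind of check (not minikube's actual pod_ready.go; the kubeconfig path, pod name, and poll interval here are illustrative assumptions), a client-go poll loop looks roughly like this:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig path and pod name are illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-zsf86", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // poll interval is a guess, not minikube's
	}
	fmt.Println("timed out waiting for pod to become Ready")
}

A loop of this shape produces exactly the pattern seen in the log: repeated "Ready":"False" lines for the metrics-server pod until either the condition flips or the wait expires.
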
	I1004 04:25:08.506490   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:08.506549   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:08.545875   67282 cri.go:89] found id: ""
	I1004 04:25:08.545909   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.545920   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:08.545933   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:08.545997   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:08.582348   67282 cri.go:89] found id: ""
	I1004 04:25:08.582375   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.582383   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:08.582389   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:08.582438   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:08.637763   67282 cri.go:89] found id: ""
	I1004 04:25:08.637797   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.637809   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:08.637816   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:08.637890   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:08.681171   67282 cri.go:89] found id: ""
	I1004 04:25:08.681205   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.681216   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:08.681224   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:08.681289   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:08.719513   67282 cri.go:89] found id: ""
	I1004 04:25:08.719542   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.719549   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:08.719555   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:08.719607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:08.762152   67282 cri.go:89] found id: ""
	I1004 04:25:08.762175   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.762183   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:08.762188   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:08.762251   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:08.799857   67282 cri.go:89] found id: ""
	I1004 04:25:08.799881   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.799892   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:08.799903   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:08.799954   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:08.835264   67282 cri.go:89] found id: ""
	I1004 04:25:08.835296   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.835308   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:08.835318   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:08.835330   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:08.875501   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:08.875532   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:08.929145   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:08.929178   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:08.942769   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:08.942808   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:09.025372   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:09.025401   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:09.025416   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:11.611179   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:11.625118   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:11.625253   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:11.661512   67282 cri.go:89] found id: ""
	I1004 04:25:11.661540   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.661547   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:11.661553   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:11.661607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:11.704902   67282 cri.go:89] found id: ""
	I1004 04:25:11.704931   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.704941   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:11.704948   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:11.705007   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:11.741747   67282 cri.go:89] found id: ""
	I1004 04:25:11.741770   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.741780   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:11.741787   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:11.741841   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:11.776838   67282 cri.go:89] found id: ""
	I1004 04:25:11.776863   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.776871   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:11.776876   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:11.776927   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:11.812996   67282 cri.go:89] found id: ""
	I1004 04:25:11.813024   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.813033   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:11.813038   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:11.813097   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:11.853718   67282 cri.go:89] found id: ""
	I1004 04:25:11.853744   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.853752   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:11.853758   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:11.853813   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:11.896840   67282 cri.go:89] found id: ""
	I1004 04:25:11.896867   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.896879   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:11.896885   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:11.896943   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:11.932529   67282 cri.go:89] found id: ""
	I1004 04:25:11.932552   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.932561   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:11.932569   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:11.932580   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:11.946504   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:11.946538   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:12.024692   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:12.024713   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:12.024724   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:12.111942   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:12.111976   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:12.156483   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:12.156522   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
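
Each cri.go cycle in this log boils down to shelling out to crictl once per expected control-plane component and treating empty output as "0 containers". A minimal sketch of that shape, using only the Go standard library and the same crictl invocation shown in the log (an illustration run on the node itself, not minikube's cri.go, and assuming crictl is on the PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs crictl and returns the non-empty container IDs it prints.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if s := strings.TrimSpace(line); s != "" {
			ids = append(ids, s)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		// An empty slice here corresponds to the `found id: ""` / "0 containers" lines in the log.
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}

On this node every component returns an empty list, which is why each cycle ends with "No container was found matching" for all of them.
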
	I1004 04:25:10.049912   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.051024   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.646058   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:16.647214   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.123343   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:16.622947   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.708243   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:14.722943   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:14.723007   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:14.758502   67282 cri.go:89] found id: ""
	I1004 04:25:14.758555   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.758567   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:14.758575   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:14.758633   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:14.796496   67282 cri.go:89] found id: ""
	I1004 04:25:14.796525   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.796532   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:14.796538   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:14.796595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:14.832216   67282 cri.go:89] found id: ""
	I1004 04:25:14.832247   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.832259   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:14.832266   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:14.832330   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:14.868461   67282 cri.go:89] found id: ""
	I1004 04:25:14.868491   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.868501   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:14.868509   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:14.868568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:14.909827   67282 cri.go:89] found id: ""
	I1004 04:25:14.909857   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.909867   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:14.909875   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:14.909949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:14.947809   67282 cri.go:89] found id: ""
	I1004 04:25:14.947839   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.947850   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:14.947857   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:14.947904   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:14.984073   67282 cri.go:89] found id: ""
	I1004 04:25:14.984101   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.984110   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:14.984115   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:14.984170   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:15.021145   67282 cri.go:89] found id: ""
	I1004 04:25:15.021179   67282 logs.go:282] 0 containers: []
	W1004 04:25:15.021191   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:15.021204   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:15.021217   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:15.075295   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:15.075328   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:15.088953   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:15.088980   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:15.175103   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:15.175128   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:15.175143   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:15.259004   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:15.259044   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:17.825029   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:17.839496   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:17.839574   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:17.877643   67282 cri.go:89] found id: ""
	I1004 04:25:17.877673   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.877684   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:17.877692   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:17.877751   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:17.921534   67282 cri.go:89] found id: ""
	I1004 04:25:17.921563   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.921574   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:17.921581   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:17.921634   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:17.961281   67282 cri.go:89] found id: ""
	I1004 04:25:17.961307   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.961315   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:17.961320   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:17.961386   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:18.001036   67282 cri.go:89] found id: ""
	I1004 04:25:18.001066   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.001078   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:18.001085   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:18.001156   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:18.043212   67282 cri.go:89] found id: ""
	I1004 04:25:18.043241   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.043252   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:18.043259   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:18.043319   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:18.082399   67282 cri.go:89] found id: ""
	I1004 04:25:18.082423   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.082430   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:18.082435   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:18.082493   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:18.120507   67282 cri.go:89] found id: ""
	I1004 04:25:18.120534   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.120544   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:18.120550   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:18.120605   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:18.156601   67282 cri.go:89] found id: ""
	I1004 04:25:18.156629   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.156640   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:18.156650   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:18.156663   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:18.198393   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:18.198424   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:18.250992   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:18.251032   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:18.267984   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:18.268015   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:18.343283   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:18.343303   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:18.343314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:14.549511   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:17.048940   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:19.051125   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:18.648462   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:21.146813   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.147244   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:18.624165   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:20.627159   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.123629   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:20.922578   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:20.938037   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:20.938122   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:20.978389   67282 cri.go:89] found id: ""
	I1004 04:25:20.978417   67282 logs.go:282] 0 containers: []
	W1004 04:25:20.978426   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:20.978431   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:20.978478   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:21.033490   67282 cri.go:89] found id: ""
	I1004 04:25:21.033520   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.033528   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:21.033533   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:21.033589   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:21.087168   67282 cri.go:89] found id: ""
	I1004 04:25:21.087198   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.087209   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:21.087216   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:21.087299   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:21.144327   67282 cri.go:89] found id: ""
	I1004 04:25:21.144356   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.144366   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:21.144373   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:21.144431   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:21.183336   67282 cri.go:89] found id: ""
	I1004 04:25:21.183378   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.183390   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:21.183397   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:21.183459   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:21.221847   67282 cri.go:89] found id: ""
	I1004 04:25:21.221878   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.221892   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:21.221901   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:21.221961   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:21.258542   67282 cri.go:89] found id: ""
	I1004 04:25:21.258573   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.258584   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:21.258590   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:21.258652   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:21.303173   67282 cri.go:89] found id: ""
	I1004 04:25:21.303202   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.303211   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:21.303218   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:21.303243   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:21.358109   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:21.358146   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:21.373958   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:21.373987   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:21.450956   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:21.450980   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:21.451006   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:21.534763   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:21.534807   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:21.550109   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.550304   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:25.148868   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:27.647698   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:25.622123   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:27.624777   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:24.082856   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:24.098263   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:24.098336   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:24.144969   67282 cri.go:89] found id: ""
	I1004 04:25:24.144999   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.145009   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:24.145015   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:24.145072   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:24.185670   67282 cri.go:89] found id: ""
	I1004 04:25:24.185693   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.185702   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:24.185708   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:24.185769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:24.223657   67282 cri.go:89] found id: ""
	I1004 04:25:24.223691   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.223703   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:24.223710   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:24.223769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:24.261841   67282 cri.go:89] found id: ""
	I1004 04:25:24.261864   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.261872   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:24.261878   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:24.261938   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:24.299734   67282 cri.go:89] found id: ""
	I1004 04:25:24.299758   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.299769   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:24.299775   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:24.299867   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:24.337413   67282 cri.go:89] found id: ""
	I1004 04:25:24.337440   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.337450   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:24.337457   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:24.337523   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:24.375963   67282 cri.go:89] found id: ""
	I1004 04:25:24.375995   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.376007   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:24.376014   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:24.376073   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:24.415978   67282 cri.go:89] found id: ""
	I1004 04:25:24.416010   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.416021   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:24.416030   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:24.416045   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:24.458703   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:24.458738   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:24.510669   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:24.510704   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:24.525646   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:24.525687   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:24.603280   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:24.603310   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:24.603324   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:27.184935   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:27.200241   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:27.200321   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:27.237546   67282 cri.go:89] found id: ""
	I1004 04:25:27.237576   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.237588   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:27.237596   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:27.237653   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:27.272598   67282 cri.go:89] found id: ""
	I1004 04:25:27.272625   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.272634   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:27.272642   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:27.272700   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:27.306659   67282 cri.go:89] found id: ""
	I1004 04:25:27.306693   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.306706   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:27.306715   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:27.306779   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:27.344315   67282 cri.go:89] found id: ""
	I1004 04:25:27.344349   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.344363   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:27.344370   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:27.344428   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:27.380231   67282 cri.go:89] found id: ""
	I1004 04:25:27.380267   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.380278   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:27.380286   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:27.380346   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:27.418137   67282 cri.go:89] found id: ""
	I1004 04:25:27.418161   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.418169   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:27.418174   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:27.418225   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:27.458235   67282 cri.go:89] found id: ""
	I1004 04:25:27.458262   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.458283   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:27.458289   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:27.458342   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:27.495161   67282 cri.go:89] found id: ""
	I1004 04:25:27.495189   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.495198   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:27.495206   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:27.495217   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:27.547749   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:27.547795   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:27.563322   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:27.563355   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:27.636682   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:27.636710   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:27.636725   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:27.711316   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:27.711354   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
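
Every "describe nodes" attempt above fails the same way: the kubectl bundled on the node cannot reach localhost:8443, which is consistent with the crictl listings finding no kube-apiserver container at all. A quick, hypothetical way to confirm that symptom from Go (a plain TCP dial, separate from anything the test harness itself runs; the timeout value is arbitrary):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The address matches the error in the log; the timeout is an arbitrary choice.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 3*time.Second)
	if err != nil {
		// With no kube-apiserver container running, this reports "connection refused".
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}

Until something on the node actually starts the apiserver, both this dial and the repeated describe-nodes attempts in the log will keep failing with the same refusal.
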
	I1004 04:25:26.050001   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:28.548322   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.147210   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:32.647027   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.122267   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:32.122501   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.250361   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:30.265789   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:30.265866   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:30.305127   67282 cri.go:89] found id: ""
	I1004 04:25:30.305166   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.305183   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:30.305190   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:30.305258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:30.346529   67282 cri.go:89] found id: ""
	I1004 04:25:30.346560   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.346570   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:30.346577   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:30.346641   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:30.387368   67282 cri.go:89] found id: ""
	I1004 04:25:30.387407   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.387418   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:30.387425   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:30.387489   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:30.428193   67282 cri.go:89] found id: ""
	I1004 04:25:30.428230   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.428242   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:30.428248   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:30.428308   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:30.465484   67282 cri.go:89] found id: ""
	I1004 04:25:30.465509   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.465518   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:30.465523   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:30.465573   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:30.501133   67282 cri.go:89] found id: ""
	I1004 04:25:30.501163   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.501174   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:30.501181   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:30.501248   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:30.536492   67282 cri.go:89] found id: ""
	I1004 04:25:30.536519   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.536530   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:30.536536   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:30.536587   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:30.571721   67282 cri.go:89] found id: ""
	I1004 04:25:30.571745   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.571753   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:30.571761   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:30.571771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:30.626922   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:30.626958   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:30.641817   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:30.641852   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:30.725604   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:30.725633   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:30.725647   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:30.800359   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:30.800393   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:33.340747   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:33.355862   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:33.355936   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:33.397628   67282 cri.go:89] found id: ""
	I1004 04:25:33.397655   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.397662   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:33.397668   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:33.397718   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:33.442100   67282 cri.go:89] found id: ""
	I1004 04:25:33.442128   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.442137   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:33.442142   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:33.442187   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:33.481035   67282 cri.go:89] found id: ""
	I1004 04:25:33.481063   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.481076   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:33.481083   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:33.481149   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:30.549816   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:33.048791   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:35.147125   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:37.647224   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:34.122573   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:36.622639   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:33.516633   67282 cri.go:89] found id: ""
	I1004 04:25:33.516661   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.516669   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:33.516677   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:33.516727   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:33.556569   67282 cri.go:89] found id: ""
	I1004 04:25:33.556600   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.556610   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:33.556617   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:33.556679   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:33.591678   67282 cri.go:89] found id: ""
	I1004 04:25:33.591715   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.591724   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:33.591731   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:33.591786   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:33.626571   67282 cri.go:89] found id: ""
	I1004 04:25:33.626594   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.626602   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:33.626607   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:33.626650   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:33.664336   67282 cri.go:89] found id: ""
	I1004 04:25:33.664359   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.664367   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:33.664375   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:33.664386   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:33.748013   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:33.748047   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:33.786730   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:33.786767   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:33.839355   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:33.839392   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:33.853807   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:33.853835   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:33.920183   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:36.420485   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:36.435150   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:36.435221   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:36.471818   67282 cri.go:89] found id: ""
	I1004 04:25:36.471842   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.471850   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:36.471855   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:36.471908   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:36.511469   67282 cri.go:89] found id: ""
	I1004 04:25:36.511496   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.511504   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:36.511509   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:36.511557   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:36.552607   67282 cri.go:89] found id: ""
	I1004 04:25:36.552633   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.552641   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:36.552646   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:36.552702   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:36.596260   67282 cri.go:89] found id: ""
	I1004 04:25:36.596282   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.596290   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:36.596295   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:36.596340   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:36.636674   67282 cri.go:89] found id: ""
	I1004 04:25:36.636700   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.636708   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:36.636713   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:36.636764   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:36.675155   67282 cri.go:89] found id: ""
	I1004 04:25:36.675194   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.675206   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:36.675214   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:36.675279   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:36.713458   67282 cri.go:89] found id: ""
	I1004 04:25:36.713485   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.713493   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:36.713498   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:36.713552   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:36.754567   67282 cri.go:89] found id: ""
	I1004 04:25:36.754596   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.754607   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:36.754618   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:36.754631   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:36.824413   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:36.824439   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:36.824453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:36.900438   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:36.900471   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:36.942238   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:36.942264   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:36.992527   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:36.992556   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:35.050546   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:37.548965   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:39.647505   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:42.146720   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:38.623559   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:41.121785   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:43.122437   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:39.506599   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:39.520782   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:39.520854   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:39.561853   67282 cri.go:89] found id: ""
	I1004 04:25:39.561880   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.561891   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:39.561898   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:39.561955   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:39.597548   67282 cri.go:89] found id: ""
	I1004 04:25:39.597581   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.597591   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:39.597598   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:39.597659   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:39.634481   67282 cri.go:89] found id: ""
	I1004 04:25:39.634517   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.634525   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:39.634530   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:39.634575   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:39.677077   67282 cri.go:89] found id: ""
	I1004 04:25:39.677107   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.677117   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:39.677124   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:39.677185   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:39.716334   67282 cri.go:89] found id: ""
	I1004 04:25:39.716356   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.716364   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:39.716369   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:39.716416   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:39.754765   67282 cri.go:89] found id: ""
	I1004 04:25:39.754792   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.754803   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:39.754810   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:39.754863   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:39.788782   67282 cri.go:89] found id: ""
	I1004 04:25:39.788811   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.788824   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:39.788832   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:39.788890   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:39.821946   67282 cri.go:89] found id: ""
	I1004 04:25:39.821970   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.821979   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:39.821988   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:39.822001   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:39.892629   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:39.892657   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:39.892674   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:39.973480   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:39.973515   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:40.018175   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:40.018203   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:40.068585   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:40.068620   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:42.583639   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:42.597249   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:42.597333   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:42.631993   67282 cri.go:89] found id: ""
	I1004 04:25:42.632020   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.632030   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:42.632037   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:42.632091   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:42.669708   67282 cri.go:89] found id: ""
	I1004 04:25:42.669739   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.669749   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:42.669762   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:42.669836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:42.705995   67282 cri.go:89] found id: ""
	I1004 04:25:42.706019   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.706030   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:42.706037   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:42.706094   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:42.740436   67282 cri.go:89] found id: ""
	I1004 04:25:42.740458   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.740466   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:42.740472   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:42.740524   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:42.774516   67282 cri.go:89] found id: ""
	I1004 04:25:42.774546   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.774557   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:42.774564   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:42.774614   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:42.807471   67282 cri.go:89] found id: ""
	I1004 04:25:42.807502   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.807510   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:42.807516   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:42.807561   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:42.851943   67282 cri.go:89] found id: ""
	I1004 04:25:42.851968   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.851977   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:42.851983   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:42.852040   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:42.887762   67282 cri.go:89] found id: ""
	I1004 04:25:42.887801   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.887812   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:42.887822   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:42.887834   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:42.960398   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:42.960423   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:42.960440   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:43.040078   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:43.040117   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:43.081614   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:43.081638   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:43.132744   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:43.132781   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:39.551722   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:42.049418   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:44.049835   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:44.646919   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:47.146884   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:45.622878   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.122299   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
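
The interleaved pod_ready lines come from the other profiles running in parallel (process ids 66755, 67541 and 66293); each one is polling its metrics-server pod in kube-system and keeps seeing Ready=False. A rough way to watch the same condition by hand, assuming the usual k8s-app=metrics-server label (the label is not shown in this log, so treat it as an assumption), with <profile> standing in for the kubeconfig context of the profile under test:

  # wait for the metrics-server pod to report Ready, giving up after 60s
  kubectl --context <profile> -n kube-system wait pod -l k8s-app=metrics-server --for=condition=Ready --timeout=60s

  # if it never becomes Ready, the pod's events usually say why (probe failures, image pulls, etc.)
  kubectl --context <profile> -n kube-system describe pod -l k8s-app=metrics-server
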
	I1004 04:25:45.647332   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:45.660765   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:45.660834   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:45.696351   67282 cri.go:89] found id: ""
	I1004 04:25:45.696379   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.696390   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:45.696397   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:45.696449   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:45.738529   67282 cri.go:89] found id: ""
	I1004 04:25:45.738553   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.738561   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:45.738566   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:45.738621   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:45.773071   67282 cri.go:89] found id: ""
	I1004 04:25:45.773094   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.773103   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:45.773110   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:45.773165   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:45.810813   67282 cri.go:89] found id: ""
	I1004 04:25:45.810840   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.810852   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:45.810859   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:45.810913   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:45.848916   67282 cri.go:89] found id: ""
	I1004 04:25:45.848942   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.848951   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:45.848956   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:45.849014   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:45.886737   67282 cri.go:89] found id: ""
	I1004 04:25:45.886763   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.886772   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:45.886778   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:45.886825   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:45.922263   67282 cri.go:89] found id: ""
	I1004 04:25:45.922291   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.922301   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:45.922307   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:45.922364   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:45.956688   67282 cri.go:89] found id: ""
	I1004 04:25:45.956710   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.956718   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:45.956725   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:45.956737   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:46.007334   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:46.007365   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:46.020892   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:46.020916   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:46.089786   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:46.089809   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:46.089822   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:46.175987   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:46.176017   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:46.549153   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.549893   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:49.147322   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:51.647365   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:50.622540   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:52.623714   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.718354   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:48.733291   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:48.733347   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:48.769149   67282 cri.go:89] found id: ""
	I1004 04:25:48.769175   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.769185   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:48.769193   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:48.769249   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:48.804386   67282 cri.go:89] found id: ""
	I1004 04:25:48.804410   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.804418   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:48.804423   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:48.804467   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:48.841747   67282 cri.go:89] found id: ""
	I1004 04:25:48.841774   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.841782   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:48.841788   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:48.841836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:48.880025   67282 cri.go:89] found id: ""
	I1004 04:25:48.880048   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.880058   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:48.880064   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:48.880121   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:48.916506   67282 cri.go:89] found id: ""
	I1004 04:25:48.916530   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.916540   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:48.916547   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:48.916607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:48.952082   67282 cri.go:89] found id: ""
	I1004 04:25:48.952105   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.952116   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:48.952122   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:48.952177   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:48.986097   67282 cri.go:89] found id: ""
	I1004 04:25:48.986124   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.986135   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:48.986143   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:48.986210   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:49.020400   67282 cri.go:89] found id: ""
	I1004 04:25:49.020428   67282 logs.go:282] 0 containers: []
	W1004 04:25:49.020436   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:49.020445   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:49.020462   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:49.074724   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:49.074754   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:49.088504   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:49.088529   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:49.165940   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:49.165961   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:49.165972   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:49.244482   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:49.244519   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:51.786086   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:51.800644   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:51.800720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:51.839951   67282 cri.go:89] found id: ""
	I1004 04:25:51.839980   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.839990   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:51.839997   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:51.840055   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:51.878660   67282 cri.go:89] found id: ""
	I1004 04:25:51.878684   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.878695   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:51.878701   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:51.878762   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:51.916640   67282 cri.go:89] found id: ""
	I1004 04:25:51.916665   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.916672   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:51.916678   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:51.916725   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:51.953800   67282 cri.go:89] found id: ""
	I1004 04:25:51.953827   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.953835   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:51.953840   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:51.953897   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:51.993107   67282 cri.go:89] found id: ""
	I1004 04:25:51.993139   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.993150   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:51.993157   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:51.993214   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:52.027426   67282 cri.go:89] found id: ""
	I1004 04:25:52.027454   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.027464   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:52.027470   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:52.027521   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:52.063608   67282 cri.go:89] found id: ""
	I1004 04:25:52.063638   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.063650   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:52.063657   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:52.063717   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:52.100052   67282 cri.go:89] found id: ""
	I1004 04:25:52.100083   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.100094   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:52.100106   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:52.100125   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:52.113801   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:52.113827   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:52.201284   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:52.201311   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:52.201322   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:52.280014   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:52.280047   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:52.318120   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:52.318145   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:51.048719   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:53.050304   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:54.146697   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:56.147015   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:58.148736   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:55.122546   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:57.123051   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:54.872245   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:54.886914   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:54.886990   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:54.927117   67282 cri.go:89] found id: ""
	I1004 04:25:54.927144   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.927152   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:54.927157   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:54.927205   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:54.962510   67282 cri.go:89] found id: ""
	I1004 04:25:54.962540   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.962552   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:54.962559   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:54.962619   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:54.996812   67282 cri.go:89] found id: ""
	I1004 04:25:54.996839   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.996848   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:54.996854   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:54.996905   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:55.034557   67282 cri.go:89] found id: ""
	I1004 04:25:55.034587   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.034597   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:55.034605   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:55.034667   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:55.072383   67282 cri.go:89] found id: ""
	I1004 04:25:55.072416   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.072427   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:55.072434   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:55.072494   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:55.121561   67282 cri.go:89] found id: ""
	I1004 04:25:55.121588   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.121598   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:55.121604   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:55.121775   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:55.165525   67282 cri.go:89] found id: ""
	I1004 04:25:55.165553   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.165564   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:55.165570   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:55.165627   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:55.201808   67282 cri.go:89] found id: ""
	I1004 04:25:55.201836   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.201846   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:55.201857   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:55.201870   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:55.280889   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:55.280917   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:55.280932   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:55.354979   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:55.355012   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:55.397144   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:55.397174   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:55.448710   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:55.448746   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
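
Between collection passes the test also runs `sudo pgrep -xnf kube-apiserver.*minikube.*` to check whether an apiserver process exists at all. Given the repeated "connection to the server localhost:8443 was refused" above, two quick checks on the node would confirm that nothing is listening on the apiserver port; this is a sketch, not taken from the log:

  # is anything bound to the apiserver port?
  sudo ss -tlnp | grep 8443

  # does the health endpoint answer at all? (connection refused here matches the kubectl errors above)
  curl -k https://localhost:8443/healthz
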
	I1004 04:25:57.963840   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:57.977027   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:57.977085   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:58.019244   67282 cri.go:89] found id: ""
	I1004 04:25:58.019273   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.019285   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:58.019293   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:58.019351   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:58.057979   67282 cri.go:89] found id: ""
	I1004 04:25:58.058008   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.058018   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:58.058027   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:58.058084   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:58.094607   67282 cri.go:89] found id: ""
	I1004 04:25:58.094639   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.094652   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:58.094658   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:58.094726   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:58.130150   67282 cri.go:89] found id: ""
	I1004 04:25:58.130177   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.130188   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:58.130196   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:58.130259   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:58.167662   67282 cri.go:89] found id: ""
	I1004 04:25:58.167691   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.167701   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:58.167709   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:58.167769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:58.203480   67282 cri.go:89] found id: ""
	I1004 04:25:58.203568   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.203585   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:58.203594   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:58.203662   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:58.239516   67282 cri.go:89] found id: ""
	I1004 04:25:58.239537   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.239545   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:58.239551   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:58.239595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:58.275525   67282 cri.go:89] found id: ""
	I1004 04:25:58.275553   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.275564   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:58.275574   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:58.275587   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:58.331191   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:58.331224   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:58.345629   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:58.345659   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:58.416297   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:58.416315   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:58.416326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:58.490659   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:58.490694   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:55.548913   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:57.549457   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:00.647858   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.146570   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:59.623396   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.624074   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.030058   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:01.044568   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:01.044659   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:01.082652   67282 cri.go:89] found id: ""
	I1004 04:26:01.082679   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.082688   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:01.082694   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:01.082750   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:01.120781   67282 cri.go:89] found id: ""
	I1004 04:26:01.120805   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.120814   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:01.120821   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:01.120878   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:01.159494   67282 cri.go:89] found id: ""
	I1004 04:26:01.159523   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.159531   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:01.159537   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:01.159584   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:01.195482   67282 cri.go:89] found id: ""
	I1004 04:26:01.195512   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.195521   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:01.195529   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:01.195589   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:01.233971   67282 cri.go:89] found id: ""
	I1004 04:26:01.233996   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.234006   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:01.234014   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:01.234076   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:01.275935   67282 cri.go:89] found id: ""
	I1004 04:26:01.275958   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.275966   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:01.275971   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:01.276018   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:01.315512   67282 cri.go:89] found id: ""
	I1004 04:26:01.315535   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.315543   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:01.315548   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:01.315603   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:01.356465   67282 cri.go:89] found id: ""
	I1004 04:26:01.356491   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.356505   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:01.356513   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:01.356523   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:01.409237   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:01.409280   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:01.423426   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:01.423453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:01.501372   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:01.501397   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:01.501413   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:01.591087   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:01.591131   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:59.549485   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.550138   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.550258   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:05.646818   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:07.647322   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.634636   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:06.122840   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:04.152506   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:04.166847   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:04.166911   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:04.203138   67282 cri.go:89] found id: ""
	I1004 04:26:04.203167   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.203177   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:04.203184   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:04.203243   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:04.237427   67282 cri.go:89] found id: ""
	I1004 04:26:04.237453   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.237464   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:04.237471   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:04.237525   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:04.272468   67282 cri.go:89] found id: ""
	I1004 04:26:04.272499   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.272511   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:04.272518   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:04.272584   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:04.307347   67282 cri.go:89] found id: ""
	I1004 04:26:04.307373   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.307384   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:04.307390   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:04.307448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:04.342450   67282 cri.go:89] found id: ""
	I1004 04:26:04.342487   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.342498   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:04.342506   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:04.342568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:04.382846   67282 cri.go:89] found id: ""
	I1004 04:26:04.382874   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.382885   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:04.382893   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:04.382945   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:04.418234   67282 cri.go:89] found id: ""
	I1004 04:26:04.418260   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.418268   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:04.418273   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:04.418328   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:04.453433   67282 cri.go:89] found id: ""
	I1004 04:26:04.453456   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.453464   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:04.453473   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:04.453487   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:04.502093   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:04.502123   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:04.515865   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:04.515897   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:04.595672   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:04.595698   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:04.595713   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:04.675273   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:04.675304   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:07.214965   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:07.229495   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:07.229568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:07.268541   67282 cri.go:89] found id: ""
	I1004 04:26:07.268580   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.268591   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:07.268599   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:07.268662   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:07.321382   67282 cri.go:89] found id: ""
	I1004 04:26:07.321414   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.321424   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:07.321431   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:07.321490   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:07.379840   67282 cri.go:89] found id: ""
	I1004 04:26:07.379869   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.379878   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:07.379884   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:07.379928   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:07.431304   67282 cri.go:89] found id: ""
	I1004 04:26:07.431333   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.431343   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:07.431349   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:07.431407   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:07.466853   67282 cri.go:89] found id: ""
	I1004 04:26:07.466880   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.466888   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:07.466893   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:07.466951   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:07.501587   67282 cri.go:89] found id: ""
	I1004 04:26:07.501613   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.501624   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:07.501630   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:07.501685   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:07.536326   67282 cri.go:89] found id: ""
	I1004 04:26:07.536354   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.536364   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:07.536371   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:07.536426   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:07.575257   67282 cri.go:89] found id: ""
	I1004 04:26:07.575283   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.575292   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:07.575299   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:07.575310   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:07.629477   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:07.629515   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:07.643294   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:07.643326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:07.720324   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:07.720350   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:07.720365   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:07.797641   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:07.797678   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:06.049580   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:08.548786   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.146544   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:12.146842   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:08.622497   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.622759   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:12.624285   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.339392   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:10.353341   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:10.353397   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:10.391023   67282 cri.go:89] found id: ""
	I1004 04:26:10.391049   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.391059   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:10.391066   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:10.391129   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:10.424345   67282 cri.go:89] found id: ""
	I1004 04:26:10.424376   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.424388   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:10.424396   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:10.424466   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:10.459344   67282 cri.go:89] found id: ""
	I1004 04:26:10.459374   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.459387   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:10.459394   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:10.459451   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:10.494898   67282 cri.go:89] found id: ""
	I1004 04:26:10.494921   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.494929   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:10.494935   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:10.494982   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:10.531084   67282 cri.go:89] found id: ""
	I1004 04:26:10.531111   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.531122   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:10.531129   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:10.531185   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:10.566918   67282 cri.go:89] found id: ""
	I1004 04:26:10.566949   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.566960   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:10.566967   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:10.567024   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:10.604888   67282 cri.go:89] found id: ""
	I1004 04:26:10.604923   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.604935   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:10.604942   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:10.605013   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:10.641578   67282 cri.go:89] found id: ""
	I1004 04:26:10.641606   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.641620   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:10.641631   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:10.641648   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:10.696848   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:10.696882   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:10.710393   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:10.710417   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:10.780854   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:10.780881   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:10.780895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:10.861732   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:10.861771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:13.403231   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:13.417246   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:13.417319   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:13.451581   67282 cri.go:89] found id: ""
	I1004 04:26:13.451607   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.451616   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:13.451621   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:13.451681   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:13.488362   67282 cri.go:89] found id: ""
	I1004 04:26:13.488388   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.488396   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:13.488401   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:13.488449   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:10.549905   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:13.048997   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:14.646627   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:16.647879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:15.123067   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:17.622729   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:13.522697   67282 cri.go:89] found id: ""
	I1004 04:26:13.522729   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.522740   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:13.522751   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:13.522803   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:13.564926   67282 cri.go:89] found id: ""
	I1004 04:26:13.564959   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.564972   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:13.564981   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:13.565058   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:13.600582   67282 cri.go:89] found id: ""
	I1004 04:26:13.600612   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.600622   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:13.600630   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:13.600688   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:13.634550   67282 cri.go:89] found id: ""
	I1004 04:26:13.634575   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.634584   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:13.634591   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:13.634646   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:13.669281   67282 cri.go:89] found id: ""
	I1004 04:26:13.669311   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.669320   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:13.669326   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:13.669388   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:13.707664   67282 cri.go:89] found id: ""
	I1004 04:26:13.707693   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.707703   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:13.707713   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:13.707727   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:13.721127   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:13.721168   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:13.788026   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:13.788051   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:13.788067   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:13.864505   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:13.864542   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:13.902896   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:13.902921   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:16.456813   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:16.470071   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:16.470138   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:16.506085   67282 cri.go:89] found id: ""
	I1004 04:26:16.506114   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.506125   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:16.506133   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:16.506189   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:16.540016   67282 cri.go:89] found id: ""
	I1004 04:26:16.540044   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.540052   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:16.540056   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:16.540100   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:16.579247   67282 cri.go:89] found id: ""
	I1004 04:26:16.579272   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.579280   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:16.579285   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:16.579332   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:16.615552   67282 cri.go:89] found id: ""
	I1004 04:26:16.615579   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.615601   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:16.615621   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:16.615675   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:16.652639   67282 cri.go:89] found id: ""
	I1004 04:26:16.652660   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.652671   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:16.652678   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:16.652732   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:16.689607   67282 cri.go:89] found id: ""
	I1004 04:26:16.689631   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.689643   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:16.689650   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:16.689720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:16.724430   67282 cri.go:89] found id: ""
	I1004 04:26:16.724458   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.724469   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:16.724475   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:16.724534   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:16.758378   67282 cri.go:89] found id: ""
	I1004 04:26:16.758412   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.758423   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:16.758434   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:16.758454   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:16.826234   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:16.826259   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:16.826273   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:16.906908   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:16.906945   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:16.950295   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:16.950321   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:17.002216   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:17.002253   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:15.549441   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:17.549816   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.147105   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:21.147403   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.622982   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:21.624073   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.516253   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:19.529664   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:19.529726   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:19.566669   67282 cri.go:89] found id: ""
	I1004 04:26:19.566700   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.566711   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:19.566718   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:19.566772   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:19.605923   67282 cri.go:89] found id: ""
	I1004 04:26:19.605951   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.605961   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:19.605968   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:19.606025   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:19.645132   67282 cri.go:89] found id: ""
	I1004 04:26:19.645158   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.645168   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:19.645175   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:19.645235   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:19.687135   67282 cri.go:89] found id: ""
	I1004 04:26:19.687160   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.687171   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:19.687178   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:19.687256   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:19.724180   67282 cri.go:89] found id: ""
	I1004 04:26:19.724213   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.724224   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:19.724230   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:19.724295   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:19.761608   67282 cri.go:89] found id: ""
	I1004 04:26:19.761638   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.761649   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:19.761656   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:19.761714   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:19.795060   67282 cri.go:89] found id: ""
	I1004 04:26:19.795089   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.795099   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:19.795106   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:19.795164   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:19.835678   67282 cri.go:89] found id: ""
	I1004 04:26:19.835703   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.835712   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:19.835722   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:19.835736   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:19.889508   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:19.889543   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:19.903206   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:19.903233   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:19.973445   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:19.973471   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:19.973485   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:20.053996   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:20.054034   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:22.594171   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:22.609084   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:22.609145   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:22.650423   67282 cri.go:89] found id: ""
	I1004 04:26:22.650449   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.650459   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:22.650466   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:22.650525   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:22.686420   67282 cri.go:89] found id: ""
	I1004 04:26:22.686450   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.686461   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:22.686469   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:22.686535   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:22.721385   67282 cri.go:89] found id: ""
	I1004 04:26:22.721408   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.721416   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:22.721421   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:22.721484   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:22.765461   67282 cri.go:89] found id: ""
	I1004 04:26:22.765492   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.765504   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:22.765511   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:22.765569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:22.798192   67282 cri.go:89] found id: ""
	I1004 04:26:22.798220   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.798230   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:22.798235   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:22.798293   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:22.833110   67282 cri.go:89] found id: ""
	I1004 04:26:22.833138   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.833147   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:22.833153   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:22.833212   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:22.875653   67282 cri.go:89] found id: ""
	I1004 04:26:22.875684   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.875696   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:22.875704   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:22.875766   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:22.913906   67282 cri.go:89] found id: ""
	I1004 04:26:22.913931   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.913938   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:22.913946   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:22.913957   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:22.969480   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:22.969511   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:22.983475   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:22.983500   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:23.059953   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:23.059982   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:23.059996   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:23.139106   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:23.139134   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:19.550307   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:22.048618   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:23.647507   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:26.147135   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:24.122370   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:26.122976   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:25.678489   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:25.692648   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:25.692705   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:25.728232   67282 cri.go:89] found id: ""
	I1004 04:26:25.728261   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.728269   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:25.728276   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:25.728335   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:25.763956   67282 cri.go:89] found id: ""
	I1004 04:26:25.763982   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.763991   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:25.763998   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:25.764057   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:25.799715   67282 cri.go:89] found id: ""
	I1004 04:26:25.799743   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.799753   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:25.799761   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:25.799840   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:25.834823   67282 cri.go:89] found id: ""
	I1004 04:26:25.834855   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.834866   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:25.834873   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:25.834933   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:25.869194   67282 cri.go:89] found id: ""
	I1004 04:26:25.869224   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.869235   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:25.869242   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:25.869303   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:25.903514   67282 cri.go:89] found id: ""
	I1004 04:26:25.903543   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.903553   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:25.903558   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:25.903606   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:25.939887   67282 cri.go:89] found id: ""
	I1004 04:26:25.939919   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.939930   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:25.939938   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:25.939996   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:25.981922   67282 cri.go:89] found id: ""
	I1004 04:26:25.981944   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.981952   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:25.981960   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:25.981971   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:26.064860   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:26.064891   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:26.105272   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:26.105296   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:26.162602   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:26.162640   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:26.176408   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:26.176439   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:26.242264   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:24.549364   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:27.049470   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.646788   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.146205   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:33.146879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.622691   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.122181   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:33.123226   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.742417   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:28.755655   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:28.755723   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:28.789338   67282 cri.go:89] found id: ""
	I1004 04:26:28.789361   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.789369   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:28.789374   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:28.789420   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:28.823513   67282 cri.go:89] found id: ""
	I1004 04:26:28.823544   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.823555   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:28.823562   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:28.823619   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:28.858826   67282 cri.go:89] found id: ""
	I1004 04:26:28.858854   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.858866   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:28.858873   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:28.858927   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:28.892552   67282 cri.go:89] found id: ""
	I1004 04:26:28.892579   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.892587   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:28.892593   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:28.892639   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:28.929250   67282 cri.go:89] found id: ""
	I1004 04:26:28.929277   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.929284   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:28.929289   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:28.929335   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:28.966554   67282 cri.go:89] found id: ""
	I1004 04:26:28.966581   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.966589   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:28.966594   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:28.966642   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:28.999930   67282 cri.go:89] found id: ""
	I1004 04:26:28.999954   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.999964   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:28.999970   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:29.000025   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:29.033687   67282 cri.go:89] found id: ""
	I1004 04:26:29.033717   67282 logs.go:282] 0 containers: []
	W1004 04:26:29.033727   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:29.033737   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:29.033752   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:29.109486   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:29.109523   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:29.149125   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:29.149152   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:29.197830   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:29.197861   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:29.211182   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:29.211204   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:29.276808   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:31.777659   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:31.791374   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:31.791425   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:31.825453   67282 cri.go:89] found id: ""
	I1004 04:26:31.825480   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.825489   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:31.825495   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:31.825553   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:31.857845   67282 cri.go:89] found id: ""
	I1004 04:26:31.857875   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.857884   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:31.857893   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:31.857949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:31.892282   67282 cri.go:89] found id: ""
	I1004 04:26:31.892309   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.892317   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:31.892322   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:31.892366   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:31.926016   67282 cri.go:89] found id: ""
	I1004 04:26:31.926037   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.926045   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:31.926051   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:31.926094   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:31.961382   67282 cri.go:89] found id: ""
	I1004 04:26:31.961415   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.961425   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:31.961433   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:31.961492   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:31.994570   67282 cri.go:89] found id: ""
	I1004 04:26:31.994602   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.994613   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:31.994620   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:31.994675   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:32.027359   67282 cri.go:89] found id: ""
	I1004 04:26:32.027383   67282 logs.go:282] 0 containers: []
	W1004 04:26:32.027391   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:32.027397   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:32.027448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:32.063518   67282 cri.go:89] found id: ""
	I1004 04:26:32.063545   67282 logs.go:282] 0 containers: []
	W1004 04:26:32.063555   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:32.063565   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:32.063577   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:32.151555   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:32.151582   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:32.190678   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:32.190700   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:32.243567   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:32.243596   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:32.256293   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:32.256320   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:32.329513   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:29.548687   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.550184   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:34.050659   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:35.147870   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:37.646571   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:35.623302   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:38.122555   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:34.830126   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:34.844760   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:34.844833   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:34.878409   67282 cri.go:89] found id: ""
	I1004 04:26:34.878433   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.878440   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:34.878445   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:34.878500   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:34.916493   67282 cri.go:89] found id: ""
	I1004 04:26:34.916516   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.916524   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:34.916532   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:34.916577   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:34.954532   67282 cri.go:89] found id: ""
	I1004 04:26:34.954556   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.954565   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:34.954570   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:34.954616   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:34.987163   67282 cri.go:89] found id: ""
	I1004 04:26:34.987190   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.987198   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:34.987205   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:34.987261   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:35.021351   67282 cri.go:89] found id: ""
	I1004 04:26:35.021379   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.021388   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:35.021394   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:35.021452   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:35.056350   67282 cri.go:89] found id: ""
	I1004 04:26:35.056376   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.056384   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:35.056390   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:35.056448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:35.093375   67282 cri.go:89] found id: ""
	I1004 04:26:35.093402   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.093412   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:35.093420   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:35.093486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:35.130509   67282 cri.go:89] found id: ""
	I1004 04:26:35.130532   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.130541   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:35.130549   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:35.130562   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:35.188138   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:35.188174   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:35.202226   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:35.202267   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:35.276652   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:35.276675   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:35.276688   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:35.357339   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:35.357373   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
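Each iteration above follows the same shape: pgrep for a running kube-apiserver process, then one crictl query per expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), each returning no IDs, then a sweep of kubelet, dmesg, "describe nodes", CRI-O, and container-status logs. A minimal, hypothetical Go sketch of that kind of probe (not minikube's implementation; assumes crictl is usable via sudo on the node, and the helper name containerIDs is invented for illustration):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns the
// container IDs it prints, one per line; an empty slice means "no container found".
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
		} else {
			fmt.Printf("%s: %v\n", name, ids)
		}
	}
}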
	I1004 04:26:37.898166   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:37.911319   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:37.911387   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:37.944551   67282 cri.go:89] found id: ""
	I1004 04:26:37.944578   67282 logs.go:282] 0 containers: []
	W1004 04:26:37.944590   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:37.944597   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:37.944652   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:37.978066   67282 cri.go:89] found id: ""
	I1004 04:26:37.978093   67282 logs.go:282] 0 containers: []
	W1004 04:26:37.978101   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:37.978107   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:37.978163   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:38.011065   67282 cri.go:89] found id: ""
	I1004 04:26:38.011095   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.011104   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:38.011109   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:38.011156   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:38.050323   67282 cri.go:89] found id: ""
	I1004 04:26:38.050349   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.050359   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:38.050366   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:38.050425   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:38.089141   67282 cri.go:89] found id: ""
	I1004 04:26:38.089169   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.089177   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:38.089182   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:38.089258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:38.122625   67282 cri.go:89] found id: ""
	I1004 04:26:38.122653   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.122663   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:38.122671   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:38.122719   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:38.159957   67282 cri.go:89] found id: ""
	I1004 04:26:38.159982   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.159990   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:38.159996   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:38.160085   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:38.194592   67282 cri.go:89] found id: ""
	I1004 04:26:38.194618   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.194626   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:38.194646   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:38.194657   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:38.263914   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:38.263945   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:38.263958   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:38.339864   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:38.339895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:38.375477   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:38.375505   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:38.428292   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:38.428320   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:36.050815   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:38.548602   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:39.646794   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.146914   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:40.123280   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.623659   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:40.941910   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:40.955041   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:40.955117   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:40.991278   67282 cri.go:89] found id: ""
	I1004 04:26:40.991307   67282 logs.go:282] 0 containers: []
	W1004 04:26:40.991317   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:40.991325   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:40.991389   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:41.025347   67282 cri.go:89] found id: ""
	I1004 04:26:41.025373   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.025385   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:41.025392   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:41.025450   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:41.060974   67282 cri.go:89] found id: ""
	I1004 04:26:41.061001   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.061019   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:41.061026   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:41.061087   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:41.097557   67282 cri.go:89] found id: ""
	I1004 04:26:41.097587   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.097598   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:41.097605   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:41.097665   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:41.136371   67282 cri.go:89] found id: ""
	I1004 04:26:41.136396   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.136405   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:41.136412   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:41.136472   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:41.172590   67282 cri.go:89] found id: ""
	I1004 04:26:41.172617   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.172627   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:41.172634   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:41.172687   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:41.209124   67282 cri.go:89] found id: ""
	I1004 04:26:41.209146   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.209154   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:41.209159   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:41.209214   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:41.250654   67282 cri.go:89] found id: ""
	I1004 04:26:41.250687   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.250699   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:41.250709   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:41.250723   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:41.305814   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:41.305864   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:41.322961   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:41.322989   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:41.427611   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:41.427632   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:41.427648   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:41.505830   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:41.505877   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:40.549691   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.549838   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:44.647149   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.146894   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:45.122344   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.122706   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:44.050902   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:44.065277   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:44.065343   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:44.101089   67282 cri.go:89] found id: ""
	I1004 04:26:44.101110   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.101117   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:44.101123   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:44.101174   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:44.138570   67282 cri.go:89] found id: ""
	I1004 04:26:44.138593   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.138601   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:44.138606   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:44.138650   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:44.178423   67282 cri.go:89] found id: ""
	I1004 04:26:44.178456   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.178478   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:44.178486   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:44.178556   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:44.213301   67282 cri.go:89] found id: ""
	I1004 04:26:44.213330   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.213338   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:44.213344   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:44.213401   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:44.247653   67282 cri.go:89] found id: ""
	I1004 04:26:44.247681   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.247688   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:44.247694   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:44.247756   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:44.281667   67282 cri.go:89] found id: ""
	I1004 04:26:44.281693   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.281704   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:44.281711   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:44.281767   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:44.314637   67282 cri.go:89] found id: ""
	I1004 04:26:44.314667   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.314677   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:44.314684   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:44.314760   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:44.349432   67282 cri.go:89] found id: ""
	I1004 04:26:44.349459   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.349469   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:44.349479   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:44.349492   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:44.397134   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:44.397168   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:44.410708   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:44.410738   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:44.482025   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:44.482049   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:44.482065   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:44.562652   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:44.562699   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:47.101459   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:47.116923   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:47.117020   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:47.153495   67282 cri.go:89] found id: ""
	I1004 04:26:47.153524   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.153534   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:47.153541   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:47.153601   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:47.189976   67282 cri.go:89] found id: ""
	I1004 04:26:47.190004   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.190014   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:47.190023   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:47.190084   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:47.225712   67282 cri.go:89] found id: ""
	I1004 04:26:47.225740   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.225748   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:47.225754   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:47.225800   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:47.261565   67282 cri.go:89] found id: ""
	I1004 04:26:47.261593   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.261603   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:47.261608   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:47.261665   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:47.298152   67282 cri.go:89] found id: ""
	I1004 04:26:47.298204   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.298214   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:47.298223   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:47.298279   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:47.338226   67282 cri.go:89] found id: ""
	I1004 04:26:47.338253   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.338261   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:47.338267   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:47.338320   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:47.378859   67282 cri.go:89] found id: ""
	I1004 04:26:47.378892   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.378902   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:47.378909   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:47.378964   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:47.418161   67282 cri.go:89] found id: ""
	I1004 04:26:47.418186   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.418194   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:47.418203   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:47.418213   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:47.470271   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:47.470311   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:47.484416   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:47.484453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:47.556744   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:47.556767   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:47.556778   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:47.634266   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:47.634299   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:45.050501   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.550072   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:49.147562   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:51.648504   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:49.623375   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:52.122346   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:50.175746   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:50.191850   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:50.191945   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:50.229542   67282 cri.go:89] found id: ""
	I1004 04:26:50.229574   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.229584   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:50.229593   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:50.229655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:50.268401   67282 cri.go:89] found id: ""
	I1004 04:26:50.268432   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.268441   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:50.268449   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:50.268522   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:50.302927   67282 cri.go:89] found id: ""
	I1004 04:26:50.302954   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.302964   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:50.302969   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:50.303029   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:50.336617   67282 cri.go:89] found id: ""
	I1004 04:26:50.336646   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.336656   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:50.336663   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:50.336724   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:50.372871   67282 cri.go:89] found id: ""
	I1004 04:26:50.372901   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.372911   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:50.372918   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:50.372977   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:50.409601   67282 cri.go:89] found id: ""
	I1004 04:26:50.409629   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.409640   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:50.409648   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:50.409723   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:50.451899   67282 cri.go:89] found id: ""
	I1004 04:26:50.451927   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.451935   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:50.451940   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:50.451991   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:50.487306   67282 cri.go:89] found id: ""
	I1004 04:26:50.487332   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.487343   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:50.487353   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:50.487369   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:50.565167   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:50.565192   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:50.565207   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:50.646155   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:50.646194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:50.688459   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:50.688489   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:50.742416   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:50.742460   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:53.257063   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:53.270546   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:53.270618   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:53.306504   67282 cri.go:89] found id: ""
	I1004 04:26:53.306530   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.306538   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:53.306544   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:53.306594   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:53.343256   67282 cri.go:89] found id: ""
	I1004 04:26:53.343285   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.343293   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:53.343299   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:53.343352   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:53.380834   67282 cri.go:89] found id: ""
	I1004 04:26:53.380864   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.380873   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:53.380880   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:53.380940   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:53.417361   67282 cri.go:89] found id: ""
	I1004 04:26:53.417391   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.417404   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:53.417415   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:53.417479   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:53.451948   67282 cri.go:89] found id: ""
	I1004 04:26:53.451970   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.451978   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:53.451983   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:53.452039   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:53.487731   67282 cri.go:89] found id: ""
	I1004 04:26:53.487756   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.487764   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:53.487769   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:53.487836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:50.049952   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:52.050275   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:54.151420   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.647593   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:54.122386   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.623398   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:53.531549   67282 cri.go:89] found id: ""
	I1004 04:26:53.531573   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.531582   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:53.531587   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:53.531643   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:53.578123   67282 cri.go:89] found id: ""
	I1004 04:26:53.578151   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.578162   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:53.578180   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:53.578195   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:53.643062   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:53.643093   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:53.696157   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:53.696194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:53.709884   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:53.709910   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:53.791272   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:53.791297   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:53.791314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:56.371608   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:56.386293   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:56.386376   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:56.425531   67282 cri.go:89] found id: ""
	I1004 04:26:56.425560   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.425571   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:56.425578   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:56.425646   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:56.470293   67282 cri.go:89] found id: ""
	I1004 04:26:56.470326   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.470335   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:56.470340   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:56.470400   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:56.508927   67282 cri.go:89] found id: ""
	I1004 04:26:56.508955   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.508963   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:56.508968   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:56.509018   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:56.549149   67282 cri.go:89] found id: ""
	I1004 04:26:56.549178   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.549191   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:56.549199   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:56.549270   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:56.589412   67282 cri.go:89] found id: ""
	I1004 04:26:56.589441   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.589451   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:56.589459   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:56.589517   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:56.624732   67282 cri.go:89] found id: ""
	I1004 04:26:56.624760   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.624770   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:56.624776   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:56.624838   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:56.662385   67282 cri.go:89] found id: ""
	I1004 04:26:56.662413   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.662421   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:56.662427   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:56.662483   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:56.697982   67282 cri.go:89] found id: ""
	I1004 04:26:56.698014   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.698025   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:56.698036   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:56.698049   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:56.750597   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:56.750633   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:56.764884   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:56.764921   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:56.844404   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:56.844433   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:56.844451   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:56.924373   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:56.924406   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:54.548706   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.549763   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.049294   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:58.648470   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:01.146948   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.148357   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.123321   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:01.622391   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.466449   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:59.481897   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:59.481972   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:59.535384   67282 cri.go:89] found id: ""
	I1004 04:26:59.535411   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.535422   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:59.535428   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:59.535486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:59.595843   67282 cri.go:89] found id: ""
	I1004 04:26:59.595875   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.595886   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:59.595894   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:59.595954   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:59.641010   67282 cri.go:89] found id: ""
	I1004 04:26:59.641041   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.641049   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:59.641057   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:59.641102   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:59.679705   67282 cri.go:89] found id: ""
	I1004 04:26:59.679736   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.679746   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:59.679753   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:59.679828   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:59.715960   67282 cri.go:89] found id: ""
	I1004 04:26:59.715985   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.715993   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:59.715998   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:59.716047   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:59.757406   67282 cri.go:89] found id: ""
	I1004 04:26:59.757442   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.757453   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:59.757461   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:59.757528   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:59.792038   67282 cri.go:89] found id: ""
	I1004 04:26:59.792066   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.792076   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:59.792083   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:59.792141   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:59.830258   67282 cri.go:89] found id: ""
	I1004 04:26:59.830281   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.830289   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:59.830296   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:59.830308   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:59.877273   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:59.877304   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:59.932570   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:59.932610   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:59.945896   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:59.945919   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:00.020363   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:00.020392   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:00.020412   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:02.601022   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:02.615039   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:02.615112   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:02.654541   67282 cri.go:89] found id: ""
	I1004 04:27:02.654567   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.654574   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:02.654579   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:02.654638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:02.691313   67282 cri.go:89] found id: ""
	I1004 04:27:02.691338   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.691349   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:02.691355   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:02.691414   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:02.735337   67282 cri.go:89] found id: ""
	I1004 04:27:02.735367   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.735376   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:02.735383   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:02.735486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:02.769604   67282 cri.go:89] found id: ""
	I1004 04:27:02.769628   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.769638   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:02.769643   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:02.769704   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:02.812913   67282 cri.go:89] found id: ""
	I1004 04:27:02.812938   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.812949   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:02.812954   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:02.813020   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:02.849910   67282 cri.go:89] found id: ""
	I1004 04:27:02.849939   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.849949   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:02.849956   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:02.850023   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:02.889467   67282 cri.go:89] found id: ""
	I1004 04:27:02.889497   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.889509   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:02.889517   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:02.889575   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:02.928508   67282 cri.go:89] found id: ""
	I1004 04:27:02.928529   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.928537   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:02.928545   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:02.928556   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:02.942783   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:02.942821   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:03.018282   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:03.018304   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:03.018314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:03.101588   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:03.101622   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:03.149911   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:03.149937   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:01.051581   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.550066   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.646200   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:07.648479   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.622932   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.623005   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.121151   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.703125   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:05.717243   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:05.717303   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:05.752564   67282 cri.go:89] found id: ""
	I1004 04:27:05.752588   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.752597   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:05.752609   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:05.752656   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:05.786955   67282 cri.go:89] found id: ""
	I1004 04:27:05.786983   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.786994   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:05.787001   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:05.787073   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:05.823848   67282 cri.go:89] found id: ""
	I1004 04:27:05.823882   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.823893   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:05.823901   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:05.823970   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:05.866192   67282 cri.go:89] found id: ""
	I1004 04:27:05.866220   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.866238   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:05.866246   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:05.866305   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:05.904051   67282 cri.go:89] found id: ""
	I1004 04:27:05.904078   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.904089   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:05.904096   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:05.904154   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:05.940041   67282 cri.go:89] found id: ""
	I1004 04:27:05.940075   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.940085   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:05.940092   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:05.940158   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:05.975758   67282 cri.go:89] found id: ""
	I1004 04:27:05.975799   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.975810   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:05.975818   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:05.975892   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:06.011044   67282 cri.go:89] found id: ""
	I1004 04:27:06.011086   67282 logs.go:282] 0 containers: []
	W1004 04:27:06.011096   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:06.011105   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:06.011116   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:06.024900   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:06.024937   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:06.109932   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:06.109960   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:06.109976   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:06.189517   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:06.189557   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:06.230019   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:06.230048   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:06.050004   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.548768   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:10.147814   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.646430   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:10.122097   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.123967   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.785355   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:08.799156   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:08.799218   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:08.843606   67282 cri.go:89] found id: ""
	I1004 04:27:08.843634   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.843643   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:08.843648   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:08.843698   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:08.884418   67282 cri.go:89] found id: ""
	I1004 04:27:08.884443   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.884450   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:08.884456   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:08.884503   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:08.925878   67282 cri.go:89] found id: ""
	I1004 04:27:08.925906   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.925914   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:08.925920   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:08.925970   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:08.966127   67282 cri.go:89] found id: ""
	I1004 04:27:08.966157   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.966167   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:08.966173   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:08.966227   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:09.010646   67282 cri.go:89] found id: ""
	I1004 04:27:09.010672   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.010682   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:09.010702   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:09.010769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:09.049738   67282 cri.go:89] found id: ""
	I1004 04:27:09.049761   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.049768   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:09.049774   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:09.049825   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:09.082709   67282 cri.go:89] found id: ""
	I1004 04:27:09.082739   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.082747   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:09.082752   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:09.082808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:09.120574   67282 cri.go:89] found id: ""
	I1004 04:27:09.120605   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.120617   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:09.120626   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:09.120636   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:09.202880   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:09.202922   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:09.242668   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:09.242700   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:09.298662   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:09.298703   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:09.314832   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:09.314868   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:09.389062   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:11.889645   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:11.902953   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:11.903012   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:11.939846   67282 cri.go:89] found id: ""
	I1004 04:27:11.939874   67282 logs.go:282] 0 containers: []
	W1004 04:27:11.939882   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:11.939888   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:11.939936   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:11.975281   67282 cri.go:89] found id: ""
	I1004 04:27:11.975303   67282 logs.go:282] 0 containers: []
	W1004 04:27:11.975311   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:11.975317   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:11.975370   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:12.011400   67282 cri.go:89] found id: ""
	I1004 04:27:12.011428   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.011438   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:12.011443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:12.011506   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:12.046862   67282 cri.go:89] found id: ""
	I1004 04:27:12.046889   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.046898   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:12.046905   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:12.046960   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:12.081537   67282 cri.go:89] found id: ""
	I1004 04:27:12.081569   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.081581   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:12.081590   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:12.081655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:12.121982   67282 cri.go:89] found id: ""
	I1004 04:27:12.122010   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.122021   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:12.122028   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:12.122086   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:12.161419   67282 cri.go:89] found id: ""
	I1004 04:27:12.161460   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.161473   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:12.161481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:12.161549   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:12.202188   67282 cri.go:89] found id: ""
	I1004 04:27:12.202230   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.202242   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:12.202253   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:12.202267   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:12.253424   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:12.253462   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:12.268116   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:12.268141   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:12.337788   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:12.337814   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:12.337826   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:12.417359   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:12.417395   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:10.549097   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.549239   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.647267   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:17.147526   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.623050   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:16.623702   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.959596   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:14.973031   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:14.973090   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:15.011451   67282 cri.go:89] found id: ""
	I1004 04:27:15.011487   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.011497   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:15.011513   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:15.011572   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:15.055767   67282 cri.go:89] found id: ""
	I1004 04:27:15.055817   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.055829   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:15.055836   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:15.055915   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:15.096357   67282 cri.go:89] found id: ""
	I1004 04:27:15.096385   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.096394   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:15.096399   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:15.096456   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:15.131824   67282 cri.go:89] found id: ""
	I1004 04:27:15.131853   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.131863   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:15.131870   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:15.131932   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:15.169250   67282 cri.go:89] found id: ""
	I1004 04:27:15.169285   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.169299   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:15.169307   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:15.169373   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:15.206852   67282 cri.go:89] found id: ""
	I1004 04:27:15.206881   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.206889   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:15.206895   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:15.206949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:15.241392   67282 cri.go:89] found id: ""
	I1004 04:27:15.241421   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.241431   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:15.241439   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:15.241498   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:15.280697   67282 cri.go:89] found id: ""
	I1004 04:27:15.280723   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.280734   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:15.280744   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:15.280758   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:15.361681   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:15.361716   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:15.404640   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:15.404676   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:15.457287   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:15.457326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:15.471162   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:15.471188   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:15.544157   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:18.045094   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:18.060228   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:18.060310   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:18.096659   67282 cri.go:89] found id: ""
	I1004 04:27:18.096688   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.096697   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:18.096703   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:18.096757   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:18.135538   67282 cri.go:89] found id: ""
	I1004 04:27:18.135565   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.135573   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:18.135579   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:18.135629   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:18.171051   67282 cri.go:89] found id: ""
	I1004 04:27:18.171082   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.171098   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:18.171106   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:18.171168   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:18.205696   67282 cri.go:89] found id: ""
	I1004 04:27:18.205725   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.205735   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:18.205742   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:18.205803   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:18.240545   67282 cri.go:89] found id: ""
	I1004 04:27:18.240566   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.240576   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:18.240584   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:18.240638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:18.279185   67282 cri.go:89] found id: ""
	I1004 04:27:18.279221   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.279232   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:18.279239   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:18.279310   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:18.318395   67282 cri.go:89] found id: ""
	I1004 04:27:18.318417   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.318424   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:18.318430   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:18.318476   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:18.352367   67282 cri.go:89] found id: ""
	I1004 04:27:18.352390   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.352398   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:18.352407   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:18.352420   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:18.365604   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:18.365637   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:18.438407   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:18.438427   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:18.438438   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:14.549690   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:16.550244   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:18.550355   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:19.647031   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:22.147826   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:19.126090   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:21.623910   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:18.513645   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:18.513679   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:18.557224   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:18.557250   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:21.111005   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:21.126573   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:21.126631   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:21.161161   67282 cri.go:89] found id: ""
	I1004 04:27:21.161190   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.161201   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:21.161207   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:21.161258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:21.199517   67282 cri.go:89] found id: ""
	I1004 04:27:21.199544   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.199555   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:21.199562   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:21.199625   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:21.236210   67282 cri.go:89] found id: ""
	I1004 04:27:21.236238   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.236246   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:21.236251   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:21.236311   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:21.272720   67282 cri.go:89] found id: ""
	I1004 04:27:21.272746   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.272753   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:21.272759   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:21.272808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:21.311439   67282 cri.go:89] found id: ""
	I1004 04:27:21.311474   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.311484   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:21.311491   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:21.311551   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:21.360400   67282 cri.go:89] found id: ""
	I1004 04:27:21.360427   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.360436   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:21.360443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:21.360511   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:21.394627   67282 cri.go:89] found id: ""
	I1004 04:27:21.394656   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.394667   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:21.394673   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:21.394721   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:21.429736   67282 cri.go:89] found id: ""
	I1004 04:27:21.429762   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.429770   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:21.429778   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:21.429789   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:21.482773   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:21.482808   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:21.497570   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:21.497595   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:21.582335   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:21.582355   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:21.582367   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:21.662196   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:21.662230   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:21.050000   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:23.050516   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.647074   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:27.147999   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.123142   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:26.624049   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.205743   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:24.222878   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:24.222951   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:24.263410   67282 cri.go:89] found id: ""
	I1004 04:27:24.263450   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.263462   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:24.263469   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:24.263532   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:24.306892   67282 cri.go:89] found id: ""
	I1004 04:27:24.306923   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.306934   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:24.306941   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:24.307008   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:24.345522   67282 cri.go:89] found id: ""
	I1004 04:27:24.345559   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.345571   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:24.345579   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:24.345638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:24.384893   67282 cri.go:89] found id: ""
	I1004 04:27:24.384918   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.384925   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:24.384931   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:24.384978   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:24.420998   67282 cri.go:89] found id: ""
	I1004 04:27:24.421025   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.421036   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:24.421043   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:24.421105   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:24.456277   67282 cri.go:89] found id: ""
	I1004 04:27:24.456305   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.456315   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:24.456322   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:24.456383   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:24.497852   67282 cri.go:89] found id: ""
	I1004 04:27:24.497881   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.497892   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:24.497900   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:24.497960   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:24.538702   67282 cri.go:89] found id: ""
	I1004 04:27:24.538736   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.538755   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:24.538766   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:24.538778   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:24.553747   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:24.553773   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:24.638059   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:24.638081   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:24.638093   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:24.718165   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:24.718212   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:24.759770   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:24.759811   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:27.311684   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:27.327493   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:27.327570   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:27.362804   67282 cri.go:89] found id: ""
	I1004 04:27:27.362827   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.362836   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:27.362841   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:27.362888   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:27.401576   67282 cri.go:89] found id: ""
	I1004 04:27:27.401604   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.401614   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:27.401621   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:27.401682   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:27.445152   67282 cri.go:89] found id: ""
	I1004 04:27:27.445177   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.445187   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:27.445193   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:27.445240   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:27.482710   67282 cri.go:89] found id: ""
	I1004 04:27:27.482734   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.482742   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:27.482749   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:27.482808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:27.519459   67282 cri.go:89] found id: ""
	I1004 04:27:27.519488   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.519498   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:27.519505   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:27.519569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:27.559381   67282 cri.go:89] found id: ""
	I1004 04:27:27.559407   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.559417   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:27.559423   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:27.559468   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:27.609040   67282 cri.go:89] found id: ""
	I1004 04:27:27.609068   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.609076   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:27.609081   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:27.609128   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:27.654537   67282 cri.go:89] found id: ""
	I1004 04:27:27.654569   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.654579   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:27.654590   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:27.654603   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:27.709062   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:27.709098   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:27.722931   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:27.722955   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:27.796863   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:27.796884   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:27.796895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:27.879840   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:27.879876   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:25.549643   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:27.551373   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:29.646879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:31.646956   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:29.122087   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:31.122774   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:30.423644   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:30.439256   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:30.439311   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:30.479612   67282 cri.go:89] found id: ""
	I1004 04:27:30.479640   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.479648   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:30.479654   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:30.479750   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:30.522846   67282 cri.go:89] found id: ""
	I1004 04:27:30.522879   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.522890   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:30.522898   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:30.522946   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:30.558935   67282 cri.go:89] found id: ""
	I1004 04:27:30.558962   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.558971   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:30.558976   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:30.559032   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:30.603383   67282 cri.go:89] found id: ""
	I1004 04:27:30.603411   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.603421   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:30.603428   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:30.603492   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:30.644700   67282 cri.go:89] found id: ""
	I1004 04:27:30.644727   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.644737   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:30.644744   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:30.644799   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:30.680328   67282 cri.go:89] found id: ""
	I1004 04:27:30.680358   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.680367   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:30.680372   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:30.680419   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:30.717973   67282 cri.go:89] found id: ""
	I1004 04:27:30.717995   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.718005   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:30.718021   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:30.718082   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:30.755838   67282 cri.go:89] found id: ""
	I1004 04:27:30.755866   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.755874   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:30.755882   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:30.755893   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:30.809999   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:30.810036   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:30.824447   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:30.824491   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:30.902008   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:30.902030   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:30.902043   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:30.986938   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:30.986984   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:30.049983   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:32.050033   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:34.050671   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.647707   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:36.146619   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.624575   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:36.122046   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.531108   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:33.546681   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:33.546759   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:33.586444   67282 cri.go:89] found id: ""
	I1004 04:27:33.586469   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.586479   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:33.586486   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:33.586552   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:33.629340   67282 cri.go:89] found id: ""
	I1004 04:27:33.629365   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.629373   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:33.629378   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:33.629429   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:33.668446   67282 cri.go:89] found id: ""
	I1004 04:27:33.668473   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.668483   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:33.668490   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:33.668548   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:33.706287   67282 cri.go:89] found id: ""
	I1004 04:27:33.706312   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.706320   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:33.706327   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:33.706385   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:33.746161   67282 cri.go:89] found id: ""
	I1004 04:27:33.746189   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.746200   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:33.746207   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:33.746270   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:33.782157   67282 cri.go:89] found id: ""
	I1004 04:27:33.782184   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.782194   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:33.782200   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:33.782262   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:33.820332   67282 cri.go:89] found id: ""
	I1004 04:27:33.820361   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.820371   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:33.820378   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:33.820437   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:33.859431   67282 cri.go:89] found id: ""
	I1004 04:27:33.859458   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.859467   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:33.859475   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:33.859485   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:33.910259   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:33.910292   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:33.925149   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:33.925177   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:34.006153   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:34.006187   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:34.006202   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:34.115882   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:34.115916   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:36.662964   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:36.677071   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:36.677139   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:36.720785   67282 cri.go:89] found id: ""
	I1004 04:27:36.720807   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.720818   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:36.720826   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:36.720875   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:36.757535   67282 cri.go:89] found id: ""
	I1004 04:27:36.757563   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.757574   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:36.757582   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:36.757630   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:36.800989   67282 cri.go:89] found id: ""
	I1004 04:27:36.801024   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.801038   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:36.801046   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:36.801112   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:36.837101   67282 cri.go:89] found id: ""
	I1004 04:27:36.837122   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.837131   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:36.837136   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:36.837181   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:36.876325   67282 cri.go:89] found id: ""
	I1004 04:27:36.876358   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.876370   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:36.876379   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:36.876444   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:36.914720   67282 cri.go:89] found id: ""
	I1004 04:27:36.914749   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.914759   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:36.914767   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:36.914828   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:36.949672   67282 cri.go:89] found id: ""
	I1004 04:27:36.949694   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.949701   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:36.949706   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:36.949754   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:36.983374   67282 cri.go:89] found id: ""
	I1004 04:27:36.983406   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.983416   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:36.983427   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:36.983440   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:37.039040   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:37.039075   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:37.054873   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:37.054898   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:37.131537   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:37.131562   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:37.131577   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:37.213958   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:37.213990   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:36.548751   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:39.049804   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:38.646028   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:40.646213   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:42.648505   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:38.623560   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:40.623721   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:43.122033   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:39.754264   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:39.771465   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:39.771545   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:39.829530   67282 cri.go:89] found id: ""
	I1004 04:27:39.829560   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.829572   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:39.829580   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:39.829639   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:39.876055   67282 cri.go:89] found id: ""
	I1004 04:27:39.876078   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.876090   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:39.876095   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:39.876142   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:39.913304   67282 cri.go:89] found id: ""
	I1004 04:27:39.913327   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.913335   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:39.913340   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:39.913389   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:39.948821   67282 cri.go:89] found id: ""
	I1004 04:27:39.948847   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.948855   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:39.948862   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:39.948916   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:39.986994   67282 cri.go:89] found id: ""
	I1004 04:27:39.987023   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.987034   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:39.987041   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:39.987141   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:40.026627   67282 cri.go:89] found id: ""
	I1004 04:27:40.026656   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.026668   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:40.026675   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:40.026734   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:40.067028   67282 cri.go:89] found id: ""
	I1004 04:27:40.067068   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.067079   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:40.067086   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:40.067144   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:40.105638   67282 cri.go:89] found id: ""
	I1004 04:27:40.105667   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.105677   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:40.105694   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:40.105707   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:40.159425   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:40.159467   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:40.175045   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:40.175073   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:40.261967   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:40.261989   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:40.262002   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:40.345317   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:40.345354   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:42.888115   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:42.901889   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:42.901948   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:42.938556   67282 cri.go:89] found id: ""
	I1004 04:27:42.938587   67282 logs.go:282] 0 containers: []
	W1004 04:27:42.938597   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:42.938604   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:42.938668   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:42.974569   67282 cri.go:89] found id: ""
	I1004 04:27:42.974595   67282 logs.go:282] 0 containers: []
	W1004 04:27:42.974606   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:42.974613   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:42.974679   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:43.010552   67282 cri.go:89] found id: ""
	I1004 04:27:43.010581   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.010593   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:43.010600   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:43.010655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:43.046204   67282 cri.go:89] found id: ""
	I1004 04:27:43.046237   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.046247   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:43.046254   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:43.046313   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:43.081612   67282 cri.go:89] found id: ""
	I1004 04:27:43.081644   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.081655   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:43.081662   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:43.081729   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:43.121103   67282 cri.go:89] found id: ""
	I1004 04:27:43.121126   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.121133   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:43.121139   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:43.121191   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:43.157104   67282 cri.go:89] found id: ""
	I1004 04:27:43.157128   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.157136   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:43.157141   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:43.157196   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:43.198927   67282 cri.go:89] found id: ""
	I1004 04:27:43.198951   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.198958   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:43.198966   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:43.198975   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:43.254534   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:43.254563   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:43.268106   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:43.268130   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:43.344382   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:43.344410   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:43.344425   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:43.426916   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:43.426948   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:41.549364   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:43.549590   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.146452   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:47.148300   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.126135   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:47.622568   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.966806   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:45.980187   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:45.980252   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:46.014196   67282 cri.go:89] found id: ""
	I1004 04:27:46.014220   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.014228   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:46.014233   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:46.014295   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:46.053910   67282 cri.go:89] found id: ""
	I1004 04:27:46.053940   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.053951   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:46.053957   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:46.054013   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:46.087896   67282 cri.go:89] found id: ""
	I1004 04:27:46.087921   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.087930   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:46.087936   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:46.087985   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:46.123441   67282 cri.go:89] found id: ""
	I1004 04:27:46.123465   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.123475   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:46.123481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:46.123545   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:46.159664   67282 cri.go:89] found id: ""
	I1004 04:27:46.159688   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.159698   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:46.159704   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:46.159761   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:46.195474   67282 cri.go:89] found id: ""
	I1004 04:27:46.195501   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.195512   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:46.195525   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:46.195569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:46.228670   67282 cri.go:89] found id: ""
	I1004 04:27:46.228693   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.228701   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:46.228706   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:46.228759   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:46.265278   67282 cri.go:89] found id: ""
	I1004 04:27:46.265303   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.265311   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:46.265325   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:46.265338   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:46.315135   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:46.315163   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:46.327765   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:46.327797   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:46.393157   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:46.393173   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:46.393184   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:46.473026   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:46.473058   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:46.049285   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:48.549053   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:49.647027   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:52.146841   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:50.122921   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:52.622913   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:49.011972   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:49.025718   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:49.025783   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:49.062749   67282 cri.go:89] found id: ""
	I1004 04:27:49.062774   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.062782   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:49.062788   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:49.062844   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:49.100838   67282 cri.go:89] found id: ""
	I1004 04:27:49.100886   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.100897   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:49.100904   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:49.100961   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:49.139966   67282 cri.go:89] found id: ""
	I1004 04:27:49.139990   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.140000   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:49.140007   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:49.140088   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:49.179347   67282 cri.go:89] found id: ""
	I1004 04:27:49.179373   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.179384   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:49.179391   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:49.179435   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:49.218086   67282 cri.go:89] found id: ""
	I1004 04:27:49.218112   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.218121   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:49.218127   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:49.218181   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:49.254779   67282 cri.go:89] found id: ""
	I1004 04:27:49.254811   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.254823   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:49.254830   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:49.254888   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:49.287351   67282 cri.go:89] found id: ""
	I1004 04:27:49.287381   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.287392   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:49.287398   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:49.287456   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:49.320051   67282 cri.go:89] found id: ""
	I1004 04:27:49.320078   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.320089   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:49.320100   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:49.320112   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:49.371270   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:49.371300   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:49.384403   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:49.384432   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:49.468132   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:49.468154   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:49.468167   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:49.543179   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:49.543211   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:52.093235   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:52.108446   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:52.108520   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:52.147590   67282 cri.go:89] found id: ""
	I1004 04:27:52.147613   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.147620   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:52.147626   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:52.147677   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:52.183066   67282 cri.go:89] found id: ""
	I1004 04:27:52.183095   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.183105   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:52.183112   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:52.183170   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:52.223109   67282 cri.go:89] found id: ""
	I1004 04:27:52.223140   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.223154   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:52.223165   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:52.223223   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:52.259547   67282 cri.go:89] found id: ""
	I1004 04:27:52.259573   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.259582   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:52.259587   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:52.259638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:52.296934   67282 cri.go:89] found id: ""
	I1004 04:27:52.296961   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.296971   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:52.296979   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:52.297040   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:52.331650   67282 cri.go:89] found id: ""
	I1004 04:27:52.331671   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.331679   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:52.331684   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:52.331728   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:52.365111   67282 cri.go:89] found id: ""
	I1004 04:27:52.365139   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.365150   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:52.365157   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:52.365239   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:52.400974   67282 cri.go:89] found id: ""
	I1004 04:27:52.401010   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.401023   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:52.401035   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:52.401049   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:52.484732   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:52.484771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:52.523322   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:52.523348   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:52.576671   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:52.576702   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:52.590263   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:52.590291   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:52.666646   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:50.549475   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:53.049259   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:54.646262   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.153196   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:55.123174   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.123932   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:55.166856   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:55.181481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:55.181562   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:55.218023   67282 cri.go:89] found id: ""
	I1004 04:27:55.218048   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.218056   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:55.218063   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:55.218121   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:55.256439   67282 cri.go:89] found id: ""
	I1004 04:27:55.256464   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.256472   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:55.256477   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:55.256531   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:55.294563   67282 cri.go:89] found id: ""
	I1004 04:27:55.294588   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.294596   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:55.294601   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:55.294656   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:55.331266   67282 cri.go:89] found id: ""
	I1004 04:27:55.331290   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.331300   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:55.331306   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:55.331370   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:55.367286   67282 cri.go:89] found id: ""
	I1004 04:27:55.367314   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.367325   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:55.367332   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:55.367391   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:55.402031   67282 cri.go:89] found id: ""
	I1004 04:27:55.402054   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.402062   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:55.402068   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:55.402122   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:55.437737   67282 cri.go:89] found id: ""
	I1004 04:27:55.437764   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.437774   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:55.437780   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:55.437842   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:55.470654   67282 cri.go:89] found id: ""
	I1004 04:27:55.470692   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.470704   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:55.470713   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:55.470726   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:55.521364   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:55.521393   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:55.534691   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:55.534716   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:55.600902   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:55.600923   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:55.600933   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:55.678896   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:55.678940   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:58.220086   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:58.234049   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:58.234110   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:58.281112   67282 cri.go:89] found id: ""
	I1004 04:27:58.281135   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.281143   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:58.281148   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:58.281191   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:58.320549   67282 cri.go:89] found id: ""
	I1004 04:27:58.320575   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.320584   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:58.320589   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:58.320635   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:58.355139   67282 cri.go:89] found id: ""
	I1004 04:27:58.355166   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.355174   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:58.355179   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:58.355225   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:58.387809   67282 cri.go:89] found id: ""
	I1004 04:27:58.387836   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.387846   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:58.387851   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:58.387908   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:58.420264   67282 cri.go:89] found id: ""
	I1004 04:27:58.420287   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.420295   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:58.420300   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:58.420349   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:58.455409   67282 cri.go:89] found id: ""
	I1004 04:27:58.455431   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.455438   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:58.455443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:58.455487   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:58.488708   67282 cri.go:89] found id: ""
	I1004 04:27:58.488734   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.488742   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:58.488749   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:58.488797   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:55.051622   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.548584   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:59.646699   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:01.648277   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:59.623008   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:02.122303   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:58.522139   67282 cri.go:89] found id: ""
	I1004 04:27:58.522161   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.522169   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:58.522176   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:58.522187   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:58.604653   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:58.604683   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:58.645141   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:58.645169   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:58.699716   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:58.699748   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:58.713197   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:58.713228   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:58.781998   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:01.282429   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:01.297266   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:01.297343   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:01.330421   67282 cri.go:89] found id: ""
	I1004 04:28:01.330446   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.330454   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:28:01.330459   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:01.330514   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:01.366960   67282 cri.go:89] found id: ""
	I1004 04:28:01.366983   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.366992   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:28:01.366998   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:01.367067   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:01.400886   67282 cri.go:89] found id: ""
	I1004 04:28:01.400910   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.400920   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:28:01.400931   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:01.400987   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:01.435556   67282 cri.go:89] found id: ""
	I1004 04:28:01.435586   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.435594   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:28:01.435601   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:01.435649   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:01.475772   67282 cri.go:89] found id: ""
	I1004 04:28:01.475810   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.475820   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:28:01.475826   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:01.475884   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:01.512380   67282 cri.go:89] found id: ""
	I1004 04:28:01.512403   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.512411   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:28:01.512417   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:01.512465   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:01.550488   67282 cri.go:89] found id: ""
	I1004 04:28:01.550517   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.550528   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:01.550536   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:28:01.550595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:28:01.586216   67282 cri.go:89] found id: ""
	I1004 04:28:01.586249   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.586261   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:28:01.586271   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:01.586285   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:01.640819   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:01.640860   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:01.656990   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:01.657020   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:28:01.731326   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:01.731354   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:01.731368   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:01.810007   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:28:01.810044   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:59.548748   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:01.551035   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.043116   66755 pod_ready.go:82] duration metric: took 4m0.000354814s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" ...
	E1004 04:28:04.043143   66755 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" (will not retry!)
	I1004 04:28:04.043167   66755 pod_ready.go:39] duration metric: took 4m15.403862245s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:04.043219   66755 kubeadm.go:597] duration metric: took 4m23.226496183s to restartPrimaryControlPlane
	W1004 04:28:04.043288   66755 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1004 04:28:04.043316   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:28:04.146697   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:06.147038   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:08.147201   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.122463   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:06.622379   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.352648   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:04.366150   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:04.366227   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:04.403272   67282 cri.go:89] found id: ""
	I1004 04:28:04.403298   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.403308   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:28:04.403315   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:04.403371   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:04.439237   67282 cri.go:89] found id: ""
	I1004 04:28:04.439269   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.439280   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:28:04.439287   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:04.439345   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:04.475532   67282 cri.go:89] found id: ""
	I1004 04:28:04.475558   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.475569   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:28:04.475576   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:04.475638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:04.511738   67282 cri.go:89] found id: ""
	I1004 04:28:04.511765   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.511775   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:28:04.511792   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:04.511850   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:04.553536   67282 cri.go:89] found id: ""
	I1004 04:28:04.553561   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.553568   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:28:04.553574   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:04.553625   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:04.589016   67282 cri.go:89] found id: ""
	I1004 04:28:04.589044   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.589053   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:28:04.589058   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:04.589106   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:04.622780   67282 cri.go:89] found id: ""
	I1004 04:28:04.622808   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.622817   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:04.622823   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:28:04.622879   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:28:04.662620   67282 cri.go:89] found id: ""
	I1004 04:28:04.662641   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.662649   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:28:04.662659   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:04.662669   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:04.717894   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:04.717928   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:04.732353   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:04.732385   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:28:04.806443   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:04.806469   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:04.806492   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:04.887684   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:28:04.887717   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
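	Note: the block above is minikube probing each control-plane component with "sudo crictl ps -a --quiet --name=<component>" and finding no containers (the apiserver is down, so "describe nodes" against localhost:8443 also fails). A minimal Go sketch of that kind of probe follows; it shells out to crictl the same way and is illustrative only, not minikube's cri.go implementation (assumes crictl and sudo are available on the node).

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // listContainers returns the IDs of CRI containers whose name matches the
	    // given component, mirroring "sudo crictl ps -a --quiet --name=<component>".
	    func listContainers(component string) ([]string, error) {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
	            ids, err := listContainers(c)
	            if err != nil {
	                fmt.Printf("listing %s: %v\n", c, err)
	                continue
	            }
	            if len(ids) == 0 {
	                // corresponds to the W-level "No container was found matching" lines above
	                fmt.Printf("no container found matching %q\n", c)
	                continue
	            }
	            fmt.Printf("%s: %v\n", c, ids)
	        }
	    }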
	I1004 04:28:07.426630   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:07.440242   67282 kubeadm.go:597] duration metric: took 4m3.475062199s to restartPrimaryControlPlane
	W1004 04:28:07.440318   67282 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1004 04:28:07.440346   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:28:08.147532   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:08.162175   67282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:28:08.172013   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:28:08.181741   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:28:08.181757   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:28:08.181801   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:28:08.191002   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:28:08.191046   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:28:08.200929   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:28:08.210241   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:28:08.210286   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:28:08.219693   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:28:08.229497   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:28:08.229534   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:28:08.239583   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:28:08.249207   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:28:08.249252   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:28:08.258516   67282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:28:08.328054   67282 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:28:08.328132   67282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:28:08.472265   67282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:28:08.472420   67282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:28:08.472543   67282 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 04:28:08.655873   67282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:28:08.657726   67282 out.go:235]   - Generating certificates and keys ...
	I1004 04:28:08.657817   67282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:28:08.657876   67282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:28:08.657942   67282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:28:08.658034   67282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:28:08.658149   67282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:28:08.658235   67282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:28:08.658309   67282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:28:08.658396   67282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:28:08.658503   67282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:28:08.658600   67282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:28:08.658651   67282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:28:08.658707   67282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:28:08.706486   67282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:28:08.909036   67282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:28:09.285968   67282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:28:09.499963   67282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:28:09.516914   67282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:28:09.517832   67282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:28:09.517900   67282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:28:09.664925   67282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:28:10.147391   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:12.646012   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:09.121686   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:11.123086   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:13.123578   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:09.666691   67282 out.go:235]   - Booting up control plane ...
	I1004 04:28:09.666889   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:28:09.671298   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:28:09.672046   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:28:09.672956   67282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:28:09.685069   67282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 04:28:14.646614   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:16.646683   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:15.125374   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:17.125685   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:18.646777   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:21.147299   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:19.623872   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:22.123077   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:23.646460   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:25.647096   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:28.147324   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:24.623730   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:27.123516   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:30.379460   66755 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.336110507s)
	I1004 04:28:30.379544   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:30.395622   66755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:28:30.406790   66755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:28:30.417380   66755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:28:30.417408   66755 kubeadm.go:157] found existing configuration files:
	
	I1004 04:28:30.417458   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:28:30.427925   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:28:30.427993   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:28:30.438694   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:28:30.448898   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:28:30.448972   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:28:30.459463   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:28:30.469227   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:28:30.469281   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:28:30.479979   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:28:30.489873   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:28:30.489936   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:28:30.499999   66755 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:28:30.549707   66755 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 04:28:30.549771   66755 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:28:30.663468   66755 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:28:30.663595   66755 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:28:30.663698   66755 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 04:28:30.675750   66755 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:28:30.677655   66755 out.go:235]   - Generating certificates and keys ...
	I1004 04:28:30.677760   66755 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:28:30.677868   66755 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:28:30.678010   66755 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:28:30.678102   66755 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:28:30.678217   66755 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:28:30.678289   66755 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:28:30.678378   66755 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:28:30.678470   66755 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:28:30.678566   66755 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:28:30.678732   66755 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:28:30.679295   66755 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:28:30.679383   66755 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:28:30.826979   66755 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:28:30.900919   66755 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 04:28:31.098221   66755 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:28:31.243668   66755 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:28:31.411766   66755 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:28:31.412181   66755 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:28:31.414652   66755 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:28:30.646927   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:32.647767   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:29.129148   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:31.623284   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:31.416504   66755 out.go:235]   - Booting up control plane ...
	I1004 04:28:31.416620   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:28:31.416730   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:28:31.418284   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:28:31.437379   66755 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:28:31.443450   66755 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:28:31.443505   66755 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:28:31.586540   66755 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 04:28:31.586706   66755 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 04:28:32.088382   66755 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.195244ms
	I1004 04:28:32.088510   66755 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 04:28:37.090291   66755 kubeadm.go:310] [api-check] The API server is healthy after 5.001756025s
	I1004 04:28:37.103845   66755 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 04:28:37.127230   66755 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 04:28:37.156917   66755 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 04:28:37.157181   66755 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-934812 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 04:28:37.171399   66755 kubeadm.go:310] [bootstrap-token] Using token: 1wt5ey.lvccf2aeyngf9mt3
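	Note: in the v1.31.1 run above, kubeadm's [kubelet-check] phase waits for the kubelet's local healthz endpoint (http://127.0.0.1:10248/healthz, healthy after ~502ms here) and then for a healthy API server before continuing. A minimal sketch of the same kind of wait, assuming the default kubelet healthz port; kubeadm's own waiter is not shown in this log.

	    package main

	    import (
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitKubeletHealthy polls the kubelet's healthz endpoint until it returns
	    // 200 OK or the deadline passes, similar to kubeadm's [kubelet-check] step.
	    func waitKubeletHealthy(timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := http.Get("http://127.0.0.1:10248/healthz")
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("kubelet not healthy within %s", timeout)
	    }

	    func main() {
	        if err := waitKubeletHealthy(4 * time.Minute); err != nil { // "This can take up to 4m0s"
	            fmt.Println(err)
	            return
	        }
	        fmt.Println("kubelet is healthy")
	    }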
	I1004 04:28:34.648249   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:37.148680   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:33.623901   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:36.122762   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:38.123147   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:37.172939   66755 out.go:235]   - Configuring RBAC rules ...
	I1004 04:28:37.173086   66755 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 04:28:37.179454   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 04:28:37.188765   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 04:28:37.192599   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 04:28:37.200359   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 04:28:37.204872   66755 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 04:28:37.498753   66755 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 04:28:37.931621   66755 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 04:28:38.497855   66755 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 04:28:38.498949   66755 kubeadm.go:310] 
	I1004 04:28:38.499023   66755 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 04:28:38.499055   66755 kubeadm.go:310] 
	I1004 04:28:38.499183   66755 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 04:28:38.499195   66755 kubeadm.go:310] 
	I1004 04:28:38.499229   66755 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 04:28:38.499316   66755 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 04:28:38.499385   66755 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 04:28:38.499393   66755 kubeadm.go:310] 
	I1004 04:28:38.499481   66755 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 04:28:38.499498   66755 kubeadm.go:310] 
	I1004 04:28:38.499563   66755 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 04:28:38.499571   66755 kubeadm.go:310] 
	I1004 04:28:38.499653   66755 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 04:28:38.499742   66755 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 04:28:38.499871   66755 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 04:28:38.499888   66755 kubeadm.go:310] 
	I1004 04:28:38.499994   66755 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 04:28:38.500104   66755 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 04:28:38.500115   66755 kubeadm.go:310] 
	I1004 04:28:38.500220   66755 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1wt5ey.lvccf2aeyngf9mt3 \
	I1004 04:28:38.500350   66755 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 \
	I1004 04:28:38.500387   66755 kubeadm.go:310] 	--control-plane 
	I1004 04:28:38.500402   66755 kubeadm.go:310] 
	I1004 04:28:38.500478   66755 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 04:28:38.500484   66755 kubeadm.go:310] 
	I1004 04:28:38.500563   66755 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1wt5ey.lvccf2aeyngf9mt3 \
	I1004 04:28:38.500686   66755 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 
	I1004 04:28:38.501820   66755 kubeadm.go:310] W1004 04:28:30.522396    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 04:28:38.502147   66755 kubeadm.go:310] W1004 04:28:30.524006    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 04:28:38.502282   66755 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:28:38.502311   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:28:38.502321   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:28:38.504185   66755 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:28:38.505600   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:28:38.518746   66755 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
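	Note: minikube writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist here; the file's exact contents are not shown in this log. A hypothetical bridge conflist of the same general shape (every field value below is an assumption, not minikube's actual file) could be generated like this:

	    package main

	    import (
	        "encoding/json"
	        "fmt"
	        "os"
	    )

	    func main() {
	        // Illustrative bridge CNI config; subnet and plugin options are assumed.
	        conf := map[string]any{
	            "cniVersion": "0.3.1",
	            "name":       "bridge",
	            "plugins": []map[string]any{
	                {
	                    "type":             "bridge",
	                    "bridge":           "bridge",
	                    "addIf":            "true",
	                    "isDefaultGateway": true,
	                    "ipMasq":           true,
	                    "ipam": map[string]any{
	                        "type":   "host-local",
	                        "subnet": "10.244.0.0/16",
	                    },
	                },
	                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
	            },
	        }
	        data, err := json.MarshalIndent(conf, "", "  ")
	        if err != nil {
	            panic(err)
	        }
	        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
	            fmt.Println("write failed (requires root):", err)
	            return
	        }
	        fmt.Printf("wrote %d bytes\n", len(data))
	    }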
	I1004 04:28:38.541311   66755 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:28:38.541422   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:38.541460   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-934812 minikube.k8s.io/updated_at=2024_10_04T04_28_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=embed-certs-934812 minikube.k8s.io/primary=true
	I1004 04:28:38.605537   66755 ops.go:34] apiserver oom_adj: -16
	I1004 04:28:38.765084   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:39.646916   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:41.651456   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:39.265365   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:39.765925   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:40.265135   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:40.766204   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:41.265734   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:41.765404   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:42.265993   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:42.765826   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:43.265776   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:43.353243   66755 kubeadm.go:1113] duration metric: took 4.811892444s to wait for elevateKubeSystemPrivileges
	I1004 04:28:43.353288   66755 kubeadm.go:394] duration metric: took 5m2.586827656s to StartCluster
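	Note: the repeated "kubectl get sa default" runs above are minikube polling until the default service account exists (the elevateKubeSystemPrivileges step), retrying roughly every 500ms for about 4.8s. A minimal sketch of that retry loop, reusing the kubectl binary path and kubeconfig shown in the log, purely for illustration:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // waitDefaultServiceAccount retries "kubectl get sa default" until it
	    // succeeds or the deadline passes, mirroring the polling seen above.
	    func waitDefaultServiceAccount(timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.1/kubectl",
	                "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
	            if err := cmd.Run(); err == nil {
	                return nil
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("default service account not ready within %s", timeout)
	    }

	    func main() {
	        if err := waitDefaultServiceAccount(time.Minute); err != nil {
	            fmt.Println(err)
	            return
	        }
	        fmt.Println("default service account exists")
	    }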
	I1004 04:28:43.353313   66755 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:28:43.353402   66755 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:28:43.355058   66755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:28:43.355309   66755 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:28:43.355388   66755 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:28:43.355533   66755 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-934812"
	I1004 04:28:43.355542   66755 addons.go:69] Setting default-storageclass=true in profile "embed-certs-934812"
	I1004 04:28:43.355556   66755 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-934812"
	I1004 04:28:43.355563   66755 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-934812"
	W1004 04:28:43.355568   66755 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:28:43.355584   66755 config.go:182] Loaded profile config "embed-certs-934812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:28:43.355598   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.355639   66755 addons.go:69] Setting metrics-server=true in profile "embed-certs-934812"
	I1004 04:28:43.355658   66755 addons.go:234] Setting addon metrics-server=true in "embed-certs-934812"
	W1004 04:28:43.355666   66755 addons.go:243] addon metrics-server should already be in state true
	I1004 04:28:43.355694   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.356024   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356075   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356095   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.356108   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.356075   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356173   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.357087   66755 out.go:177] * Verifying Kubernetes components...
	I1004 04:28:43.358428   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:28:43.373646   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45715
	I1004 04:28:43.373874   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I1004 04:28:43.374344   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.374344   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.374927   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.374948   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.375003   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.375027   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.375285   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.375342   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.375499   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.375884   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.375928   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.376269   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43725
	I1004 04:28:43.376636   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.377073   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.377099   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.377455   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.377883   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.377918   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.378402   66755 addons.go:234] Setting addon default-storageclass=true in "embed-certs-934812"
	W1004 04:28:43.378420   66755 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:28:43.378447   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.378705   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.378734   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.394001   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
	I1004 04:28:43.394289   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I1004 04:28:43.394645   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.394760   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.395195   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.395213   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.395302   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.395317   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.395596   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.395626   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.395842   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.396120   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.396160   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.397590   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.399391   66755 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:28:43.400581   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:28:43.400598   66755 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:28:43.400619   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.405134   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.405778   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I1004 04:28:43.405968   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.405996   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.406230   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.406383   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.406428   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.406571   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.406698   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.406825   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.406847   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.407455   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.407600   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.409278   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.411006   66755 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:28:40.622426   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:42.623400   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:43.412106   66755 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:28:43.412124   66755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:28:43.412389   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.414167   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
	I1004 04:28:43.414796   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.415285   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.415309   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.415657   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.415710   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.415911   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.416195   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.416217   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.416440   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.416628   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.416759   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.416856   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.418235   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.418426   66755 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:28:43.418436   66755 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:28:43.418456   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.421305   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.421761   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.421779   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.421966   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.422654   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.422789   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.422877   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.580648   66755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:28:43.615728   66755 node_ready.go:35] waiting up to 6m0s for node "embed-certs-934812" to be "Ready" ...
	I1004 04:28:43.625558   66755 node_ready.go:49] node "embed-certs-934812" has status "Ready":"True"
	I1004 04:28:43.625600   66755 node_ready.go:38] duration metric: took 9.827384ms for node "embed-certs-934812" to be "Ready" ...
	I1004 04:28:43.625612   66755 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:43.634425   66755 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace to be "Ready" ...
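	Note: the pod_ready.go lines interleaved through this section are separate test profiles polling pod conditions; the metrics-server pods never report Ready, and one wait times out at 04:28:44 after 4m0s. A minimal client-go sketch of checking a pod's Ready condition is below; the kubeconfig path is a placeholder and this is not minikube's pod_ready.go code.

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // isPodReady reports whether the named pod has condition Ready=True.
	    func isPodReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	        pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        for _, cond := range pod.Status.Conditions {
	            if cond.Type == corev1.PodReady {
	                return cond.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        ready, err := isPodReady(cs, "kube-system", "coredns-7c65d6cfc9-h5tbr")
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println("Ready:", ready)
	    }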
	I1004 04:28:43.748926   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:28:43.774727   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:28:43.781558   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:28:43.781589   66755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:28:43.838039   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:28:43.838067   66755 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:28:43.945364   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:28:43.945392   66755 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:28:44.005000   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:28:44.253491   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.253521   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.253828   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.253896   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.253910   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.253925   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.253938   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.254130   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.254149   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.254164   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.267367   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.267396   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.267680   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.267700   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.864663   66755 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089890385s)
	I1004 04:28:44.864722   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.864734   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.865046   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.865070   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.865086   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.865095   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.866872   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.866877   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.866907   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.138868   66755 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133828074s)
	I1004 04:28:45.138926   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:45.138942   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:45.139243   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:45.139265   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.139276   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:45.139283   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:45.139484   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:45.139497   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.139507   66755 addons.go:475] Verifying addon metrics-server=true in "embed-certs-934812"
	I1004 04:28:45.141046   66755 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1004 04:28:44.147013   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:44.648117   67541 pod_ready.go:82] duration metric: took 4m0.007930603s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	E1004 04:28:44.648144   67541 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 04:28:44.648154   67541 pod_ready.go:39] duration metric: took 4m7.419382357s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:44.648170   67541 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:28:44.648200   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:44.648256   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:44.712473   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:44.712500   67541 cri.go:89] found id: ""
	I1004 04:28:44.712510   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:44.712568   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.717619   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:44.717688   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:44.760036   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:44.760061   67541 cri.go:89] found id: ""
	I1004 04:28:44.760071   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:44.760124   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.766402   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:44.766465   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:44.821766   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:44.821792   67541 cri.go:89] found id: ""
	I1004 04:28:44.821801   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:44.821858   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.826315   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:44.826370   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:44.873526   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:44.873547   67541 cri.go:89] found id: ""
	I1004 04:28:44.873556   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:44.873615   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.878375   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:44.878442   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:44.920240   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:44.920261   67541 cri.go:89] found id: ""
	I1004 04:28:44.920270   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:44.920322   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.925102   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:44.925158   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:44.967386   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:44.967406   67541 cri.go:89] found id: ""
	I1004 04:28:44.967416   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:44.967471   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.971979   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:44.972056   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:45.009842   67541 cri.go:89] found id: ""
	I1004 04:28:45.009869   67541 logs.go:282] 0 containers: []
	W1004 04:28:45.009881   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:45.009890   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:45.009952   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:45.055166   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:45.055189   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:45.055194   67541 cri.go:89] found id: ""
	I1004 04:28:45.055201   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:45.055258   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:45.060362   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:45.066118   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:45.066351   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:45.128185   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:45.128221   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:45.270042   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:45.270084   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:45.309065   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:45.309093   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:45.352299   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:45.352327   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:45.401846   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:45.401882   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:45.447474   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:45.447530   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:45.500734   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:45.500765   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:46.040224   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:46.040275   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:46.112675   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:46.112716   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:46.128530   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:46.128553   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:46.175007   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:46.175039   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:46.222706   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:46.222738   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
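
The "listing CRI containers" / "Gathering logs" pairs above all follow one pattern: sudo crictl ps -a --quiet --name=<component> prints the matching container IDs (one per line), and each ID is then fed to crictl logs --tail 400. A minimal standalone sketch of that pattern, shelling out the same way the trace does (this is an illustration, not minikube's actual cri.go/logs.go code; the helper name containerIDs is made up):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs shells out the same way the trace above does:
// `crictl ps -a --quiet --name=<component>` prints one container ID per line.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("crictl failed:", err)
			continue
		}
		for _, id := range ids {
			// --tail 400 mirrors the log-gathering commands in the trace above.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s/%s: %d bytes of logs\n", component, id, len(logs))
		}
	}
}
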
	I1004 04:28:44.623804   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:47.122548   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:45.142166   66755 addons.go:510] duration metric: took 1.786788452s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1004 04:28:45.642731   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:46.641705   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.641730   66755 pod_ready.go:82] duration metric: took 3.007270041s for pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.641743   66755 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.646744   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.646767   66755 pod_ready.go:82] duration metric: took 5.01485ms for pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.646777   66755 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.652554   66755 pod_ready.go:93] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.652572   66755 pod_ready.go:82] duration metric: took 5.78883ms for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.652580   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:48.659404   66755 pod_ready.go:103] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:51.158765   66755 pod_ready.go:93] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.158787   66755 pod_ready.go:82] duration metric: took 4.506200726s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.158796   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.162949   66755 pod_ready.go:93] pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.162967   66755 pod_ready.go:82] duration metric: took 4.16468ms for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.162975   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9czbc" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.167309   66755 pod_ready.go:93] pod "kube-proxy-9czbc" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.167327   66755 pod_ready.go:82] duration metric: took 4.347415ms for pod "kube-proxy-9czbc" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.167334   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.171048   66755 pod_ready.go:93] pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.171065   66755 pod_ready.go:82] duration metric: took 3.724785ms for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.171071   66755 pod_ready.go:39] duration metric: took 7.545445402s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:51.171083   66755 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:28:51.171126   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:51.186751   66755 api_server.go:72] duration metric: took 7.831380288s to wait for apiserver process to appear ...
	I1004 04:28:51.186782   66755 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:28:51.186799   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:28:51.192753   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 200:
	ok
	I1004 04:28:51.194259   66755 api_server.go:141] control plane version: v1.31.1
	I1004 04:28:51.194284   66755 api_server.go:131] duration metric: took 7.491456ms to wait for apiserver health ...
	I1004 04:28:51.194292   66755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:28:51.241469   66755 system_pods.go:59] 9 kube-system pods found
	I1004 04:28:51.241491   66755 system_pods.go:61] "coredns-7c65d6cfc9-h5tbr" [87deb61f-2ce4-4d45-91da-c16557b5ef75] Running
	I1004 04:28:51.241496   66755 system_pods.go:61] "coredns-7c65d6cfc9-p52s6" [b9b3cd7f-f28e-4502-a55d-7792cfa5a6fe] Running
	I1004 04:28:51.241500   66755 system_pods.go:61] "etcd-embed-certs-934812" [487917e4-1b38-4b84-baf9-eacc1113718e] Running
	I1004 04:28:51.241503   66755 system_pods.go:61] "kube-apiserver-embed-certs-934812" [7fcfc483-3c53-415d-8329-5cae1ecd022f] Running
	I1004 04:28:51.241507   66755 system_pods.go:61] "kube-controller-manager-embed-certs-934812" [9ab25b16-916b-49f7-b34d-a12cf8552769] Running
	I1004 04:28:51.241514   66755 system_pods.go:61] "kube-proxy-9czbc" [dedff5a2-62b6-49c3-8369-9182d1c5bf7a] Running
	I1004 04:28:51.241517   66755 system_pods.go:61] "kube-scheduler-embed-certs-934812" [88a5acd5-108f-482a-ac24-7b75a30428ff] Running
	I1004 04:28:51.241525   66755 system_pods.go:61] "metrics-server-6867b74b74-fh2lk" [12e3e884-2ad3-4eaa-a505-822717e5bc8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:51.241528   66755 system_pods.go:61] "storage-provisioner" [67b4ef22-068c-4d14-840e-deab91c5ab94] Running
	I1004 04:28:51.241534   66755 system_pods.go:74] duration metric: took 47.237476ms to wait for pod list to return data ...
	I1004 04:28:51.241541   66755 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:28:51.438932   66755 default_sa.go:45] found service account: "default"
	I1004 04:28:51.438957   66755 default_sa.go:55] duration metric: took 197.410206ms for default service account to be created ...
	I1004 04:28:51.438966   66755 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:28:51.642064   66755 system_pods.go:86] 9 kube-system pods found
	I1004 04:28:51.642091   66755 system_pods.go:89] "coredns-7c65d6cfc9-h5tbr" [87deb61f-2ce4-4d45-91da-c16557b5ef75] Running
	I1004 04:28:51.642095   66755 system_pods.go:89] "coredns-7c65d6cfc9-p52s6" [b9b3cd7f-f28e-4502-a55d-7792cfa5a6fe] Running
	I1004 04:28:51.642100   66755 system_pods.go:89] "etcd-embed-certs-934812" [487917e4-1b38-4b84-baf9-eacc1113718e] Running
	I1004 04:28:51.642103   66755 system_pods.go:89] "kube-apiserver-embed-certs-934812" [7fcfc483-3c53-415d-8329-5cae1ecd022f] Running
	I1004 04:28:51.642107   66755 system_pods.go:89] "kube-controller-manager-embed-certs-934812" [9ab25b16-916b-49f7-b34d-a12cf8552769] Running
	I1004 04:28:51.642111   66755 system_pods.go:89] "kube-proxy-9czbc" [dedff5a2-62b6-49c3-8369-9182d1c5bf7a] Running
	I1004 04:28:51.642115   66755 system_pods.go:89] "kube-scheduler-embed-certs-934812" [88a5acd5-108f-482a-ac24-7b75a30428ff] Running
	I1004 04:28:51.642121   66755 system_pods.go:89] "metrics-server-6867b74b74-fh2lk" [12e3e884-2ad3-4eaa-a505-822717e5bc8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:51.642124   66755 system_pods.go:89] "storage-provisioner" [67b4ef22-068c-4d14-840e-deab91c5ab94] Running
	I1004 04:28:51.642133   66755 system_pods.go:126] duration metric: took 203.1616ms to wait for k8s-apps to be running ...
	I1004 04:28:51.642139   66755 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:28:51.642176   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:51.658916   66755 system_svc.go:56] duration metric: took 16.763146ms WaitForService to wait for kubelet
	I1004 04:28:51.658948   66755 kubeadm.go:582] duration metric: took 8.303579518s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:28:51.658964   66755 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:28:51.839048   66755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:28:51.839067   66755 node_conditions.go:123] node cpu capacity is 2
	I1004 04:28:51.839076   66755 node_conditions.go:105] duration metric: took 180.108785ms to run NodePressure ...
	I1004 04:28:51.839086   66755 start.go:241] waiting for startup goroutines ...
	I1004 04:28:51.839093   66755 start.go:246] waiting for cluster config update ...
	I1004 04:28:51.839103   66755 start.go:255] writing updated cluster config ...
	I1004 04:28:51.839343   66755 ssh_runner.go:195] Run: rm -f paused
	I1004 04:28:51.887283   66755 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:28:51.889326   66755 out.go:177] * Done! kubectl is now configured to use "embed-certs-934812" cluster and "default" namespace by default
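
The pod_ready.go lines in the embed-certs-934812 run above amount to polling each pod until its PodReady condition reports True, bounded by a timeout (here 6m0s per pod). A rough client-go sketch of that wait, assuming the in-VM kubeconfig path seen elsewhere in this trace and one of the pod names listed above; this approximates the idea, it is not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True,
// which is what the Ready:"True"/"False" lines above reflect.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-embed-certs-934812", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("gave up:", ctx.Err()) // e.g. "context deadline exceeded"
			return
		case <-time.After(2 * time.Second):
		}
	}
}
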
	I1004 04:28:48.765066   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:48.780955   67541 api_server.go:72] duration metric: took 4m18.802753607s to wait for apiserver process to appear ...
	I1004 04:28:48.780988   67541 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:28:48.781022   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:48.781074   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:48.817315   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:48.817337   67541 cri.go:89] found id: ""
	I1004 04:28:48.817346   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:48.817406   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.821619   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:48.821676   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:48.860019   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:48.860043   67541 cri.go:89] found id: ""
	I1004 04:28:48.860052   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:48.860101   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.864005   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:48.864065   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:48.901273   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:48.901295   67541 cri.go:89] found id: ""
	I1004 04:28:48.901303   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:48.901353   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.905950   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:48.906007   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:48.939708   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:48.939735   67541 cri.go:89] found id: ""
	I1004 04:28:48.939745   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:48.939812   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.943625   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:48.943692   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:48.979452   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:48.979481   67541 cri.go:89] found id: ""
	I1004 04:28:48.979490   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:48.979550   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.983629   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:48.983692   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:49.021137   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:49.021160   67541 cri.go:89] found id: ""
	I1004 04:28:49.021169   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:49.021242   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.025644   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:49.025712   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:49.062410   67541 cri.go:89] found id: ""
	I1004 04:28:49.062437   67541 logs.go:282] 0 containers: []
	W1004 04:28:49.062447   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:49.062452   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:49.062499   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:49.098959   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:49.098990   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:49.098996   67541 cri.go:89] found id: ""
	I1004 04:28:49.099005   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:49.099067   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.103474   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.107824   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:49.107852   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:49.228249   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:49.228278   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:49.269454   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:49.269479   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:49.305639   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:49.305666   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:49.770318   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:49.770348   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:49.808468   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:49.808493   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:49.884965   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:49.884997   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:49.901874   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:49.901898   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:49.952844   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:49.952869   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:49.986100   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:49.986141   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:50.023082   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:50.023108   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:50.074848   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:50.074876   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:50.112513   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:50.112541   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:52.658644   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:28:52.663076   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 200:
	ok
	I1004 04:28:52.663997   67541 api_server.go:141] control plane version: v1.31.1
	I1004 04:28:52.664017   67541 api_server.go:131] duration metric: took 3.8830221s to wait for apiserver health ...
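
The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, succeeding once it returns 200 with the body "ok". A throwaway probe along those lines (assumption: TLS verification is skipped here purely for brevity instead of loading the cluster CA; only appropriate for debugging):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves a cluster-CA-signed certificate; this throwaway
	// probe skips verification rather than loading that CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.201:8444/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the literal body "ok",
	// matching the "returned 200: ok" lines above.
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}
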
	I1004 04:28:52.664024   67541 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:28:52.664045   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:52.664085   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:52.704174   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:52.704193   67541 cri.go:89] found id: ""
	I1004 04:28:52.704200   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:52.704253   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.708388   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:52.708438   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:52.743028   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:52.743053   67541 cri.go:89] found id: ""
	I1004 04:28:52.743062   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:52.743108   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.747354   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:52.747405   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:52.782350   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:52.782373   67541 cri.go:89] found id: ""
	I1004 04:28:52.782382   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:52.782424   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.786336   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:52.786394   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:52.826929   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:52.826950   67541 cri.go:89] found id: ""
	I1004 04:28:52.826958   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:52.827018   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.831039   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:52.831094   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:52.865963   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:52.865984   67541 cri.go:89] found id: ""
	I1004 04:28:52.865992   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:52.866032   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.869982   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:52.870024   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:52.919060   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:52.919081   67541 cri.go:89] found id: ""
	I1004 04:28:52.919091   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:52.919139   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.923080   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:52.923131   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:52.962615   67541 cri.go:89] found id: ""
	I1004 04:28:52.962636   67541 logs.go:282] 0 containers: []
	W1004 04:28:52.962643   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:52.962649   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:52.962706   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:52.999914   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:52.999936   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:52.999940   67541 cri.go:89] found id: ""
	I1004 04:28:52.999947   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:52.999998   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:53.003894   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:53.007759   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:53.007776   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:53.021269   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:53.021289   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:53.088683   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:53.088711   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:53.127363   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:53.127387   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:53.163467   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:53.163490   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:53.212683   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:53.212717   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:49.123892   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:51.124121   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:53.124323   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:49.686881   67282 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:28:49.687234   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:28:49.687487   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
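
The kubeadm kubelet-check above probes kubelet's local healthz endpoint on 127.0.0.1:10248; "connection refused" means nothing is listening there, i.e. the kubelet never came up (or exited before binding the port). A small sketch of the same probe plus the obvious follow-up of reading the kubelet journal, the same journalctl command the log gathering above already runs (illustrative only):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	// Same probe kubeadm's kubelet-check performs against the kubelet's
	// local healthz endpoint.
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet healthz failed:", err)
		// On failure, the kubelet journal usually holds the actual reason.
		out, _ := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "50").CombinedOutput()
		fmt.Println(string(out))
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
}
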
	I1004 04:28:53.569320   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:53.569360   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:53.644197   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:53.644231   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:53.747465   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:53.747497   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:53.788761   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:53.788798   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:53.822705   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:53.822737   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:53.857525   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:53.857548   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:53.894880   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:53.894904   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:56.455254   67541 system_pods.go:59] 8 kube-system pods found
	I1004 04:28:56.455286   67541 system_pods.go:61] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running
	I1004 04:28:56.455293   67541 system_pods.go:61] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running
	I1004 04:28:56.455299   67541 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running
	I1004 04:28:56.455304   67541 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running
	I1004 04:28:56.455309   67541 system_pods.go:61] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:28:56.455314   67541 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running
	I1004 04:28:56.455322   67541 system_pods.go:61] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:56.455329   67541 system_pods.go:61] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:28:56.455338   67541 system_pods.go:74] duration metric: took 3.791308758s to wait for pod list to return data ...
	I1004 04:28:56.455347   67541 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:28:56.457799   67541 default_sa.go:45] found service account: "default"
	I1004 04:28:56.457817   67541 default_sa.go:55] duration metric: took 2.463452ms for default service account to be created ...
	I1004 04:28:56.457825   67541 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:28:56.462569   67541 system_pods.go:86] 8 kube-system pods found
	I1004 04:28:56.462593   67541 system_pods.go:89] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running
	I1004 04:28:56.462601   67541 system_pods.go:89] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running
	I1004 04:28:56.462608   67541 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running
	I1004 04:28:56.462615   67541 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running
	I1004 04:28:56.462620   67541 system_pods.go:89] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:28:56.462626   67541 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running
	I1004 04:28:56.462632   67541 system_pods.go:89] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:56.462637   67541 system_pods.go:89] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:28:56.462645   67541 system_pods.go:126] duration metric: took 4.814032ms to wait for k8s-apps to be running ...
	I1004 04:28:56.462657   67541 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:28:56.462749   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:56.478944   67541 system_svc.go:56] duration metric: took 16.282384ms WaitForService to wait for kubelet
	I1004 04:28:56.478966   67541 kubeadm.go:582] duration metric: took 4m26.500769346s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:28:56.478982   67541 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:28:56.481946   67541 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:28:56.481968   67541 node_conditions.go:123] node cpu capacity is 2
	I1004 04:28:56.481980   67541 node_conditions.go:105] duration metric: took 2.992423ms to run NodePressure ...
	I1004 04:28:56.481993   67541 start.go:241] waiting for startup goroutines ...
	I1004 04:28:56.482006   67541 start.go:246] waiting for cluster config update ...
	I1004 04:28:56.482018   67541 start.go:255] writing updated cluster config ...
	I1004 04:28:56.482450   67541 ssh_runner.go:195] Run: rm -f paused
	I1004 04:28:56.528299   67541 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:28:56.530289   67541 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-281471" cluster and "default" namespace by default
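
The node_conditions.go lines above read the node's reported capacity (cpu, ephemeral storage) and verify that the node is not under pressure. A hedged client-go sketch of that read, again assuming the in-VM kubeconfig path used elsewhere in this trace; it prints the same two capacities and the pressure conditions, which should all be False on a healthy node:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// Corresponds to "ephemeral capacity is 17734596Ki" / "cpu capacity is 2" above.
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", node.Name, cpu.String(), storage.String())
		// "verifying NodePressure" boils down to these conditions staying False.
		for _, cond := range node.Status.Conditions {
			switch cond.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("  %s=%s\n", cond.Type, cond.Status)
			}
		}
	}
}
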
	I1004 04:28:55.625569   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:58.122544   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:54.687773   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:28:54.688026   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:29:00.124374   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:02.624622   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:05.123726   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:07.622036   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:04.688599   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:29:04.688808   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:29:09.623060   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:11.623590   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:12.123919   66293 pod_ready.go:82] duration metric: took 4m0.007496621s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	E1004 04:29:12.123939   66293 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 04:29:12.123946   66293 pod_ready.go:39] duration metric: took 4m3.607239118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
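
The 4m0s WaitExtra timeout above ends in "context deadline exceeded" because metrics-server-6867b74b74-zsf86 never left the ContainersNotReady state. When chasing a pod stuck like that, the container statuses usually carry the reason (image pull failure, crash loop, and so on). A small client-go sketch that dumps them, assuming the addon's usual k8s-app=metrics-server label and the same in-VM kubeconfig path as before:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Printf("%s phase=%s\n", pod.Name, pod.Status.Phase)
		for _, cs := range pod.Status.ContainerStatuses {
			// A container that never becomes ready usually says why here,
			// e.g. ImagePullBackOff or CrashLoopBackOff.
			if cs.State.Waiting != nil {
				fmt.Printf("  %s waiting: %s - %s\n", cs.Name, cs.State.Waiting.Reason, cs.State.Waiting.Message)
			}
		}
	}
}
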
	I1004 04:29:12.123960   66293 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:29:12.123985   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:12.124023   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:12.174748   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:12.174767   66293 cri.go:89] found id: ""
	I1004 04:29:12.174775   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:12.174823   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.179374   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:12.179436   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:12.219617   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:12.219637   66293 cri.go:89] found id: ""
	I1004 04:29:12.219646   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:12.219699   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.223774   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:12.223844   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:12.261339   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:12.261360   66293 cri.go:89] found id: ""
	I1004 04:29:12.261369   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:12.261424   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.265364   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:12.265414   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:12.313178   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:12.313197   66293 cri.go:89] found id: ""
	I1004 04:29:12.313206   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:12.313271   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.317440   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:12.317498   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:12.353037   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:12.353054   66293 cri.go:89] found id: ""
	I1004 04:29:12.353072   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:12.353125   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.357212   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:12.357272   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:12.392082   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:12.392106   66293 cri.go:89] found id: ""
	I1004 04:29:12.392115   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:12.392167   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.396333   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:12.396395   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:12.439298   66293 cri.go:89] found id: ""
	I1004 04:29:12.439329   66293 logs.go:282] 0 containers: []
	W1004 04:29:12.439337   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:12.439343   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:12.439387   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:12.478798   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:12.478814   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:12.478818   66293 cri.go:89] found id: ""
	I1004 04:29:12.478824   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:12.478866   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.483035   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.486977   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:12.486992   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:12.520849   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:12.520875   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:13.072628   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:13.072671   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:13.137973   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:13.138000   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:13.259585   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:13.259611   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:13.312315   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:13.312340   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:13.352351   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:13.352377   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:13.391319   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:13.391352   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:13.430681   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:13.430712   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:13.464929   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:13.464957   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:13.505312   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:13.505340   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:13.520476   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:13.520517   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:13.582723   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:13.582752   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:16.131437   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:29:16.150426   66293 api_server.go:72] duration metric: took 4m14.921074088s to wait for apiserver process to appear ...
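
The apiserver-process wait above is pgrep in a loop: -x/-n/-f ask for the newest process whose full command line matches kube-apiserver.*minikube.*, and the wait ends once such a process exists. A minimal sketch of that poll (illustrative, with an arbitrary 6-minute bound):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Same probe as the ssh_runner lines above: pgrep exits non-zero while
	// no matching process exists, so keep retrying until it succeeds.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver process appeared, pid %s", out)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for the apiserver process")
}
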
	I1004 04:29:16.150457   66293 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:29:16.150498   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:16.150559   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:16.197236   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:16.197265   66293 cri.go:89] found id: ""
	I1004 04:29:16.197275   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:16.197341   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.202103   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:16.202187   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:16.236881   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:16.236907   66293 cri.go:89] found id: ""
	I1004 04:29:16.236916   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:16.236976   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.241220   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:16.241289   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:16.275727   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:16.275750   66293 cri.go:89] found id: ""
	I1004 04:29:16.275759   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:16.275828   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.280282   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:16.280352   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:16.320297   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:16.320323   66293 cri.go:89] found id: ""
	I1004 04:29:16.320332   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:16.320386   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.324982   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:16.325038   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:16.367062   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:16.367081   66293 cri.go:89] found id: ""
	I1004 04:29:16.367089   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:16.367143   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.371124   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:16.371182   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:16.405706   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:16.405728   66293 cri.go:89] found id: ""
	I1004 04:29:16.405738   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:16.405785   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.410027   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:16.410084   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:16.444937   66293 cri.go:89] found id: ""
	I1004 04:29:16.444961   66293 logs.go:282] 0 containers: []
	W1004 04:29:16.444971   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:16.444978   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:16.445032   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:16.480123   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:16.480153   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:16.480160   66293 cri.go:89] found id: ""
	I1004 04:29:16.480168   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:16.480228   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.484216   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.488156   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:16.488177   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:16.501573   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:16.501591   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:16.600789   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:16.600814   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:16.641604   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:16.641634   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:16.696735   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:16.696764   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:16.737153   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:16.737177   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:17.188490   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:17.188546   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:17.262072   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:17.262108   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:17.310881   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:17.310911   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:17.356105   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:17.356135   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:17.398916   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:17.398948   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:17.440122   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:17.440149   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:17.482529   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:17.482553   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:20.034163   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:29:20.039165   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 200:
	ok
	I1004 04:29:20.040105   66293 api_server.go:141] control plane version: v1.31.1
	I1004 04:29:20.040124   66293 api_server.go:131] duration metric: took 3.889660333s to wait for apiserver health ...
	I1004 04:29:20.040131   66293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:29:20.040156   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:20.040203   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:20.078208   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:20.078234   66293 cri.go:89] found id: ""
	I1004 04:29:20.078244   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:20.078306   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.082751   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:20.082808   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:20.128002   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:20.128024   66293 cri.go:89] found id: ""
	I1004 04:29:20.128034   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:20.128084   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.132039   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:20.132097   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:20.171887   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:20.171911   66293 cri.go:89] found id: ""
	I1004 04:29:20.171921   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:20.171978   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.176095   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:20.176150   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:20.215155   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:20.215175   66293 cri.go:89] found id: ""
	I1004 04:29:20.215183   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:20.215241   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.219738   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:20.219814   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:20.256116   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:20.256134   66293 cri.go:89] found id: ""
	I1004 04:29:20.256142   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:20.256194   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.261201   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:20.261281   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:20.302328   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:20.302350   66293 cri.go:89] found id: ""
	I1004 04:29:20.302359   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:20.302414   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.306488   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:20.306551   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:20.341266   66293 cri.go:89] found id: ""
	I1004 04:29:20.341290   66293 logs.go:282] 0 containers: []
	W1004 04:29:20.341300   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:20.341307   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:20.341361   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:20.379560   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:20.379584   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:20.379589   66293 cri.go:89] found id: ""
	I1004 04:29:20.379598   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:20.379653   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.383816   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.388118   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:20.388137   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:20.487661   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:20.487686   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:20.539728   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:20.539754   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:20.577435   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:20.577463   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:20.616450   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:20.616480   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:20.658292   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:20.658316   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:20.733483   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:20.733515   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:20.749004   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:20.749033   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:20.799355   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:20.799383   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:20.839676   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:20.839699   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:20.874870   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:20.874896   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:20.912635   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:20.912658   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:20.968377   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:20.968405   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:23.820462   66293 system_pods.go:59] 8 kube-system pods found
	I1004 04:29:23.820491   66293 system_pods.go:61] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running
	I1004 04:29:23.820497   66293 system_pods.go:61] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running
	I1004 04:29:23.820501   66293 system_pods.go:61] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running
	I1004 04:29:23.820506   66293 system_pods.go:61] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running
	I1004 04:29:23.820514   66293 system_pods.go:61] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running
	I1004 04:29:23.820517   66293 system_pods.go:61] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running
	I1004 04:29:23.820524   66293 system_pods.go:61] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:29:23.820529   66293 system_pods.go:61] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running
	I1004 04:29:23.820537   66293 system_pods.go:74] duration metric: took 3.780400092s to wait for pod list to return data ...
	I1004 04:29:23.820544   66293 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:29:23.823119   66293 default_sa.go:45] found service account: "default"
	I1004 04:29:23.823137   66293 default_sa.go:55] duration metric: took 2.58707ms for default service account to be created ...
	I1004 04:29:23.823144   66293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:29:23.827365   66293 system_pods.go:86] 8 kube-system pods found
	I1004 04:29:23.827385   66293 system_pods.go:89] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running
	I1004 04:29:23.827389   66293 system_pods.go:89] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running
	I1004 04:29:23.827393   66293 system_pods.go:89] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running
	I1004 04:29:23.827397   66293 system_pods.go:89] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running
	I1004 04:29:23.827400   66293 system_pods.go:89] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running
	I1004 04:29:23.827405   66293 system_pods.go:89] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running
	I1004 04:29:23.827410   66293 system_pods.go:89] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:29:23.827415   66293 system_pods.go:89] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running
	I1004 04:29:23.827422   66293 system_pods.go:126] duration metric: took 4.27475ms to wait for k8s-apps to be running ...
	I1004 04:29:23.827428   66293 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:29:23.827468   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:29:23.844696   66293 system_svc.go:56] duration metric: took 17.261418ms WaitForService to wait for kubelet
	I1004 04:29:23.844724   66293 kubeadm.go:582] duration metric: took 4m22.61537826s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:29:23.844746   66293 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:29:23.847873   66293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:29:23.847892   66293 node_conditions.go:123] node cpu capacity is 2
	I1004 04:29:23.847902   66293 node_conditions.go:105] duration metric: took 3.149916ms to run NodePressure ...
	I1004 04:29:23.847915   66293 start.go:241] waiting for startup goroutines ...
	I1004 04:29:23.847923   66293 start.go:246] waiting for cluster config update ...
	I1004 04:29:23.847932   66293 start.go:255] writing updated cluster config ...
	I1004 04:29:23.848202   66293 ssh_runner.go:195] Run: rm -f paused
	I1004 04:29:23.894092   66293 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:29:23.895736   66293 out.go:177] * Done! kubectl is now configured to use "no-preload-658545" cluster and "default" namespace by default
	I1004 04:29:24.690241   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:29:24.690419   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:04.692816   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:04.693091   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:04.693114   67282 kubeadm.go:310] 
	I1004 04:30:04.693149   67282 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:30:04.693214   67282 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:30:04.693236   67282 kubeadm.go:310] 
	I1004 04:30:04.693295   67282 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:30:04.693327   67282 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:30:04.693451   67282 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:30:04.693460   67282 kubeadm.go:310] 
	I1004 04:30:04.693568   67282 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:30:04.693614   67282 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:30:04.693668   67282 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:30:04.693688   67282 kubeadm.go:310] 
	I1004 04:30:04.693843   67282 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:30:04.693966   67282 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:30:04.693982   67282 kubeadm.go:310] 
	I1004 04:30:04.694097   67282 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:30:04.694218   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:30:04.694305   67282 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:30:04.694387   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:30:04.694399   67282 kubeadm.go:310] 
	I1004 04:30:04.695379   67282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:30:04.695478   67282 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:30:04.695566   67282 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1004 04:30:04.695695   67282 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1004 04:30:04.695742   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:30:05.153635   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:30:05.170057   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:30:05.179541   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:30:05.179563   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:30:05.179611   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:30:05.188969   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:30:05.189025   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:30:05.198049   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:30:05.207031   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:30:05.207118   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:30:05.216934   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:30:05.226477   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:30:05.226541   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:30:05.236222   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:30:05.245314   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:30:05.245374   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:30:05.255762   67282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:30:05.329816   67282 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:30:05.329953   67282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:30:05.482342   67282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:30:05.482549   67282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:30:05.482692   67282 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 04:30:05.666400   67282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:30:05.668115   67282 out.go:235]   - Generating certificates and keys ...
	I1004 04:30:05.668217   67282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:30:05.668319   67282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:30:05.668460   67282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:30:05.668562   67282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:30:05.668660   67282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:30:05.668734   67282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:30:05.668823   67282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:30:05.668905   67282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:30:05.669010   67282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:30:05.669130   67282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:30:05.669186   67282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:30:05.669269   67282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:30:05.773446   67282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:30:05.823736   67282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:30:05.951294   67282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:30:06.250340   67282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:30:06.275797   67282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:30:06.276877   67282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:30:06.276944   67282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:30:06.437286   67282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:30:06.438849   67282 out.go:235]   - Booting up control plane ...
	I1004 04:30:06.438952   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:30:06.443688   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:30:06.444596   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:30:06.445267   67282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:30:06.457334   67282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 04:30:46.456706   67282 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:30:46.456854   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:46.457117   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:51.456986   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:51.457240   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:31:01.457062   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:31:01.457288   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:31:21.456976   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:31:21.457277   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:32:01.456978   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:32:01.457225   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:32:01.457249   67282 kubeadm.go:310] 
	I1004 04:32:01.457312   67282 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:32:01.457374   67282 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:32:01.457383   67282 kubeadm.go:310] 
	I1004 04:32:01.457434   67282 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:32:01.457512   67282 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:32:01.457678   67282 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:32:01.457692   67282 kubeadm.go:310] 
	I1004 04:32:01.457838   67282 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:32:01.457892   67282 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:32:01.457946   67282 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:32:01.457957   67282 kubeadm.go:310] 
	I1004 04:32:01.458102   67282 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:32:01.458217   67282 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:32:01.458233   67282 kubeadm.go:310] 
	I1004 04:32:01.458379   67282 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:32:01.458494   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:32:01.458604   67282 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:32:01.458699   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:32:01.458710   67282 kubeadm.go:310] 
	I1004 04:32:01.459157   67282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:32:01.459272   67282 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:32:01.459386   67282 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1004 04:32:01.459464   67282 kubeadm.go:394] duration metric: took 7m57.553695137s to StartCluster
	I1004 04:32:01.459522   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:32:01.459586   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:32:01.500997   67282 cri.go:89] found id: ""
	I1004 04:32:01.501026   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.501037   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:32:01.501044   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:32:01.501102   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:32:01.537240   67282 cri.go:89] found id: ""
	I1004 04:32:01.537276   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.537288   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:32:01.537295   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:32:01.537349   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:32:01.573959   67282 cri.go:89] found id: ""
	I1004 04:32:01.573995   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.574007   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:32:01.574013   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:32:01.574074   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:32:01.610614   67282 cri.go:89] found id: ""
	I1004 04:32:01.610645   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.610657   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:32:01.610665   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:32:01.610716   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:32:01.645520   67282 cri.go:89] found id: ""
	I1004 04:32:01.645554   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.645567   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:32:01.645574   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:32:01.645640   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:32:01.679787   67282 cri.go:89] found id: ""
	I1004 04:32:01.679814   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.679823   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:32:01.679828   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:32:01.679873   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:32:01.714860   67282 cri.go:89] found id: ""
	I1004 04:32:01.714883   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.714891   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:32:01.714897   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:32:01.714952   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:32:01.761170   67282 cri.go:89] found id: ""
	I1004 04:32:01.761198   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.761208   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:32:01.761220   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:32:01.761232   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:32:01.822966   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:32:01.823006   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:32:01.839482   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:32:01.839510   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:32:01.917863   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:32:01.917887   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:32:01.917901   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:32:02.027216   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:32:02.027247   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1004 04:32:02.069804   67282 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1004 04:32:02.069852   67282 out.go:270] * 
	W1004 04:32:02.069922   67282 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:32:02.069939   67282 out.go:270] * 
	W1004 04:32:02.070740   67282 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 04:32:02.074308   67282 out.go:201] 
	W1004 04:32:02.075387   67282 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:32:02.075427   67282 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1004 04:32:02.075458   67282 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1004 04:32:02.076675   67282 out.go:201] 
	
	
	==> CRI-O <==
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.854341273Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016323854321249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ec0a7da-5f1b-46bb-a479-acc472a1a0ea name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.854891344Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3bdf2e6d-8803-4cd7-bee6-7a81687da466 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.855008621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3bdf2e6d-8803-4cd7-bee6-7a81687da466 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.855064354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3bdf2e6d-8803-4cd7-bee6-7a81687da466 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.889798241Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40f6e124-dc0e-4b57-8222-a2cd9b58f5ad name=/runtime.v1.RuntimeService/Version
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.889884685Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40f6e124-dc0e-4b57-8222-a2cd9b58f5ad name=/runtime.v1.RuntimeService/Version
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.890658213Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=332e7cd0-cf1e-4281-acae-0089f1b48f76 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.891097736Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016323891078488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=332e7cd0-cf1e-4281-acae-0089f1b48f76 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.891523478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa3fc16c-5637-4af5-a13a-d75cd7746f8e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.891572535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa3fc16c-5637-4af5-a13a-d75cd7746f8e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.891639829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fa3fc16c-5637-4af5-a13a-d75cd7746f8e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.926706296Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a0b1f74e-87ba-4797-a8a4-aeb704a2375c name=/runtime.v1.RuntimeService/Version
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.926785988Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a0b1f74e-87ba-4797-a8a4-aeb704a2375c name=/runtime.v1.RuntimeService/Version
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.928093603Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15583a23-fd1d-41c5-a346-fd38f75feddd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.928471492Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016323928449749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15583a23-fd1d-41c5-a346-fd38f75feddd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.929050768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e73bf3c-55b8-4f03-b8b0-2e6dd673b01a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.929101484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e73bf3c-55b8-4f03-b8b0-2e6dd673b01a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.929140979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0e73bf3c-55b8-4f03-b8b0-2e6dd673b01a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.960031889Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e56d8d78-4e9e-4cf9-a5be-8b7351b7ed8f name=/runtime.v1.RuntimeService/Version
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.960148047Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e56d8d78-4e9e-4cf9-a5be-8b7351b7ed8f name=/runtime.v1.RuntimeService/Version
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.961314084Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97907bef-fcf4-4fe2-9671-7cf6d21ca97d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.961673464Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016323961651260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97907bef-fcf4-4fe2-9671-7cf6d21ca97d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.962191207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9592d15f-7fdc-41e6-bf64-b6987df3b3bc name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.962252214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9592d15f-7fdc-41e6-bf64-b6987df3b3bc name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:32:03 old-k8s-version-420062 crio[636]: time="2024-10-04 04:32:03.962285410Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9592d15f-7fdc-41e6-bf64-b6987df3b3bc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 4 04:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057605] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040409] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.074027] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.556132] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.574130] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.887139] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.071312] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072511] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.216496] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.132348] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.289222] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[Oct 4 04:24] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.060637] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.786232] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[ +11.909104] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 4 04:28] systemd-fstab-generator[5073]: Ignoring "noauto" option for root device
	[Oct 4 04:30] systemd-fstab-generator[5352]: Ignoring "noauto" option for root device
	[  +0.068575] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 04:32:04 up 8 min,  0 users,  load average: 0.00, 0.08, 0.05
	Linux old-k8s-version-420062 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5527]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0000d88c0, 0xc000cd73e0, 0x1, 0x0, 0x0)
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5527]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5527]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0001f7dc0)
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5527]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5527]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5527]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5527]: goroutine 156 [select]:
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5527]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc00067de50, 0x1, 0x0, 0x0, 0x0, 0x0)
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5527]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5527]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000cb9bc0, 0x0, 0x0)
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5527]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5527]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0001f7dc0)
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5527]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5527]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5527]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Oct 04 04:32:01 old-k8s-version-420062 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 04 04:32:01 old-k8s-version-420062 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 04 04:32:01 old-k8s-version-420062 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Oct 04 04:32:01 old-k8s-version-420062 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 04 04:32:01 old-k8s-version-420062 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5573]: I1004 04:32:01.834439    5573 server.go:416] Version: v1.20.0
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5573]: I1004 04:32:01.834665    5573 server.go:837] Client rotation is on, will bootstrap in background
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5573]: I1004 04:32:01.836531    5573 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5573]: W1004 04:32:01.837409    5573 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 04 04:32:01 old-k8s-version-420062 kubelet[5573]: I1004 04:32:01.837994    5573 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-420062 -n old-k8s-version-420062
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-420062 -n old-k8s-version-420062: exit status 2 (223.496465ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-420062" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (676.99s)
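The kubeadm hint captured in the log above can be exercised by hand from the CI host. A minimal sketch, assuming the profile name old-k8s-version-420062 and the CRI-O socket path shown in the log, and using the same out/minikube-linux-amd64 binary the tests invoke:

	# Is the kubelet unit actually running inside the VM?
	out/minikube-linux-amd64 ssh -p old-k8s-version-420062 "sudo systemctl --no-pager status kubelet"
	# Recent kubelet journal entries around the restart loop
	out/minikube-linux-amd64 ssh -p old-k8s-version-420062 "sudo journalctl --no-pager -xeu kubelet | tail -n 100"
	# Any control-plane containers CRI-O managed to start
	out/minikube-linux-amd64 ssh -p old-k8s-version-420062 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"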

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471: exit status 3 (3.167913551s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 04:21:14.020147   67411 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	E1004 04:21:14.020167   67411 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-281471 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-281471 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154407727s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-281471 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471: exit status 3 (3.061066197s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 04:21:23.236161   67494 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	E1004 04:21:23.236184   67494 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-281471" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
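The repeated "dial tcp 192.168.39.201:22: connect: no route to host" errors above mean the test host could not reach the VM's SSH port after the stop, so status reported "Error" rather than "Stopped". A quick way to distinguish a fully stopped VM from one that is half-up but unreachable is sketched below; it assumes the libvirt domain carries the profile name (as the other domains in this log do) and that netcat is available on the host:

	# What libvirt thinks the VM state is
	sudo virsh list --all | grep default-k8s-diff-port-281471
	# What minikube itself reports for the profile
	out/minikube-linux-amd64 status -p default-k8s-diff-port-281471
	# Is anything still answering on the recorded SSH endpoint?
	nc -vz -w 3 192.168.39.201 22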

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-934812 -n embed-certs-934812
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-04 04:37:52.421810643 +0000 UTC m=+6591.354751194
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
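The wait above is on pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. The same check can be reproduced by hand; this is a sketch only, assuming the kubeconfig context matches the profile name (minikube's usual convention):

	# List the dashboard pods the test was waiting for
	kubectl --context embed-certs-934812 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	# If they exist but never become Ready, inspect their events
	kubectl --context embed-certs-934812 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard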
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-934812 -n embed-certs-934812
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-934812 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-934812 logs -n 25: (2.086051256s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-658545                                   | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-934812            | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-934812                                  | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-617497             | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-617497                  | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617497 --memory=2200 --alsologtostderr   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:17 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-617497 image list                           | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	| delete  | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:18 UTC |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-658545                  | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-658545                                   | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC | 04 Oct 24 04:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-281471  | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC | 04 Oct 24 04:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-420062        | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-934812                 | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-934812                                  | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:19 UTC | 04 Oct 24 04:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC | 04 Oct 24 04:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-420062             | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC | 04 Oct 24 04:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-281471       | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:21 UTC | 04 Oct 24 04:28 UTC |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 04:21:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 04:21:23.276574   67541 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:21:23.276701   67541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:21:23.276710   67541 out.go:358] Setting ErrFile to fd 2...
	I1004 04:21:23.276715   67541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:21:23.276893   67541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:21:23.277439   67541 out.go:352] Setting JSON to false
	I1004 04:21:23.278387   67541 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7428,"bootTime":1728008255,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 04:21:23.278482   67541 start.go:139] virtualization: kvm guest
	I1004 04:21:23.280571   67541 out.go:177] * [default-k8s-diff-port-281471] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 04:21:23.282033   67541 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 04:21:23.282063   67541 notify.go:220] Checking for updates...
	I1004 04:21:23.284454   67541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 04:21:23.285843   67541 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:21:23.287026   67541 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:21:23.288328   67541 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 04:21:23.289544   67541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 04:21:23.291321   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:21:23.291979   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:21:23.292059   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:21:23.306995   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I1004 04:21:23.307440   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:21:23.308080   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:21:23.308106   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:21:23.308442   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:21:23.308642   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:21:23.308893   67541 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 04:21:23.309208   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:21:23.309280   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:21:23.323807   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I1004 04:21:23.324281   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:21:23.324777   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:21:23.324797   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:21:23.325085   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:21:23.325248   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:21:23.359916   67541 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 04:21:23.361482   67541 start.go:297] selected driver: kvm2
	I1004 04:21:23.361504   67541 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:21:23.361657   67541 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 04:21:23.362533   67541 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:21:23.362621   67541 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 04:21:23.378088   67541 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 04:21:23.378515   67541 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:21:23.378547   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:21:23.378591   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:21:23.378627   67541 start.go:340] cluster config:
	{Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:21:23.378727   67541 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:21:23.380705   67541 out.go:177] * Starting "default-k8s-diff-port-281471" primary control-plane node in "default-k8s-diff-port-281471" cluster
	I1004 04:21:20.068102   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:23.140106   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:23.381986   67541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:21:23.382036   67541 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 04:21:23.382048   67541 cache.go:56] Caching tarball of preloaded images
	I1004 04:21:23.382125   67541 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 04:21:23.382135   67541 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 04:21:23.382254   67541 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/config.json ...
	I1004 04:21:23.382433   67541 start.go:360] acquireMachinesLock for default-k8s-diff-port-281471: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:21:29.220163   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:32.292105   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:38.372080   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:41.444091   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:47.524103   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:50.596091   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:56.676086   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:59.748055   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:05.828125   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:08.900042   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:14.980094   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:18.052114   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:24.132087   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:27.204139   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:33.284040   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:36.356076   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:42.436190   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:45.508075   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:51.588061   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:54.660042   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:00.740141   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:03.812099   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:09.892076   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:12.964133   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:15.968919   66755 start.go:364] duration metric: took 4m6.72532498s to acquireMachinesLock for "embed-certs-934812"
	I1004 04:23:15.968984   66755 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:15.968992   66755 fix.go:54] fixHost starting: 
	I1004 04:23:15.969309   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:15.969356   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:15.984739   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34447
	I1004 04:23:15.985214   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:15.985743   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:23:15.985769   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:15.986104   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:15.986289   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:15.986449   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:23:15.988237   66755 fix.go:112] recreateIfNeeded on embed-certs-934812: state=Stopped err=<nil>
	I1004 04:23:15.988263   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	W1004 04:23:15.988415   66755 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:15.990473   66755 out.go:177] * Restarting existing kvm2 VM for "embed-certs-934812" ...
	I1004 04:23:15.965929   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:15.965974   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:23:15.966321   66293 buildroot.go:166] provisioning hostname "no-preload-658545"
	I1004 04:23:15.966348   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:23:15.966530   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:23:15.968760   66293 machine.go:96] duration metric: took 4m37.423316886s to provisionDockerMachine
	I1004 04:23:15.968806   66293 fix.go:56] duration metric: took 4m37.446149084s for fixHost
	I1004 04:23:15.968814   66293 start.go:83] releasing machines lock for "no-preload-658545", held for 4m37.446179902s
	W1004 04:23:15.968836   66293 start.go:714] error starting host: provision: host is not running
	W1004 04:23:15.968935   66293 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1004 04:23:15.968946   66293 start.go:729] Will try again in 5 seconds ...
	I1004 04:23:15.991914   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Start
	I1004 04:23:15.992106   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring networks are active...
	I1004 04:23:15.992995   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring network default is active
	I1004 04:23:15.993392   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring network mk-embed-certs-934812 is active
	I1004 04:23:15.993728   66755 main.go:141] libmachine: (embed-certs-934812) Getting domain xml...
	I1004 04:23:15.994410   66755 main.go:141] libmachine: (embed-certs-934812) Creating domain...
	I1004 04:23:17.232262   66755 main.go:141] libmachine: (embed-certs-934812) Waiting to get IP...
	I1004 04:23:17.233339   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.233793   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.233879   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.233797   67957 retry.go:31] will retry after 221.075745ms: waiting for machine to come up
	I1004 04:23:17.456413   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.456917   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.456941   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.456869   67957 retry.go:31] will retry after 354.386237ms: waiting for machine to come up
	I1004 04:23:17.812523   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.812949   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.812973   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.812905   67957 retry.go:31] will retry after 338.999517ms: waiting for machine to come up
	I1004 04:23:18.153589   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:18.154029   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:18.154056   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:18.153987   67957 retry.go:31] will retry after 555.533205ms: waiting for machine to come up
	I1004 04:23:18.710680   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:18.711155   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:18.711181   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:18.711104   67957 retry.go:31] will retry after 733.812197ms: waiting for machine to come up
	I1004 04:23:20.970507   66293 start.go:360] acquireMachinesLock for no-preload-658545: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:23:19.447202   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:19.447644   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:19.447671   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:19.447600   67957 retry.go:31] will retry after 575.303848ms: waiting for machine to come up
	I1004 04:23:20.024465   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:20.024788   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:20.024819   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:20.024735   67957 retry.go:31] will retry after 894.593683ms: waiting for machine to come up
	I1004 04:23:20.920880   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:20.921499   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:20.921522   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:20.921480   67957 retry.go:31] will retry after 924.978895ms: waiting for machine to come up
	I1004 04:23:21.848064   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:21.848498   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:21.848619   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:21.848550   67957 retry.go:31] will retry after 1.554806984s: waiting for machine to come up
	I1004 04:23:23.404569   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:23.404936   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:23.404964   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:23.404884   67957 retry.go:31] will retry after 1.700496318s: waiting for machine to come up
	I1004 04:23:25.106988   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:25.107410   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:25.107441   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:25.107351   67957 retry.go:31] will retry after 1.913555474s: waiting for machine to come up
	I1004 04:23:27.022672   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:27.023134   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:27.023161   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:27.023096   67957 retry.go:31] will retry after 3.208946613s: waiting for machine to come up
	I1004 04:23:30.235462   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:30.235910   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:30.235942   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:30.235868   67957 retry.go:31] will retry after 3.125545279s: waiting for machine to come up
	I1004 04:23:33.364563   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.365007   66755 main.go:141] libmachine: (embed-certs-934812) Found IP for machine: 192.168.61.74
	I1004 04:23:33.365031   66755 main.go:141] libmachine: (embed-certs-934812) Reserving static IP address...
	I1004 04:23:33.365047   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has current primary IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.365595   66755 main.go:141] libmachine: (embed-certs-934812) Reserved static IP address: 192.168.61.74
	I1004 04:23:33.365628   66755 main.go:141] libmachine: (embed-certs-934812) Waiting for SSH to be available...
	I1004 04:23:33.365648   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "embed-certs-934812", mac: "52:54:00:25:fb:50", ip: "192.168.61.74"} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.365667   66755 main.go:141] libmachine: (embed-certs-934812) DBG | skip adding static IP to network mk-embed-certs-934812 - found existing host DHCP lease matching {name: "embed-certs-934812", mac: "52:54:00:25:fb:50", ip: "192.168.61.74"}
	I1004 04:23:33.365682   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Getting to WaitForSSH function...
	I1004 04:23:33.367835   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.368155   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.368185   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.368297   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Using SSH client type: external
	I1004 04:23:33.368322   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa (-rw-------)
	I1004 04:23:33.368359   66755 main.go:141] libmachine: (embed-certs-934812) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.74 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:23:33.368369   66755 main.go:141] libmachine: (embed-certs-934812) DBG | About to run SSH command:
	I1004 04:23:33.368377   66755 main.go:141] libmachine: (embed-certs-934812) DBG | exit 0
	I1004 04:23:33.496067   66755 main.go:141] libmachine: (embed-certs-934812) DBG | SSH cmd err, output: <nil>: 
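
The exchange above is the driver's SSH readiness probe: after the DHCP lease appears it keeps running "exit 0" through an external ssh invocation until the freshly booted guest answers on port 22. Below is a minimal sketch of the same wait-for-SSH idea; it is illustrative only (a plain TCP dial with a capped backoff), not minikube's actual WaitForSSH/retry.go code, and the timeout and backoff values are assumptions.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls addr (host:port) until a TCP connection succeeds or the
// deadline passes, roughly mirroring the "waiting for machine to come up"
// retries seen in the log above.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond // illustrative starting backoff
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil // port 22 answered; provisioning can proceed
		}
		time.Sleep(backoff)
		if backoff < 3*time.Second { // cap the growth, as the logged retries do
			backoff += backoff / 2
		}
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForSSH("192.168.61.74:22", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
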
	I1004 04:23:33.496559   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetConfigRaw
	I1004 04:23:33.497310   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:33.500858   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.501360   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.501403   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.501750   66755 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/config.json ...
	I1004 04:23:33.502058   66755 machine.go:93] provisionDockerMachine start ...
	I1004 04:23:33.502084   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:33.502303   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.505899   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.506442   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.506475   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.506686   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.506947   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.507165   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.507324   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.507541   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.507744   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.507757   66755 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:23:33.624518   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:23:33.624547   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.624795   66755 buildroot.go:166] provisioning hostname "embed-certs-934812"
	I1004 04:23:33.624826   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.625021   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.627597   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.627916   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.627948   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.628115   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.628312   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.628444   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.628608   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.628785   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.629023   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.629040   66755 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-934812 && echo "embed-certs-934812" | sudo tee /etc/hostname
	I1004 04:23:33.758642   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-934812
	
	I1004 04:23:33.758681   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.761325   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.761654   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.761696   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.761849   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.762034   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.762164   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.762297   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.762426   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.762636   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.762652   66755 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-934812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-934812/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-934812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:23:33.889571   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:33.889601   66755 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:23:33.889642   66755 buildroot.go:174] setting up certificates
	I1004 04:23:33.889654   66755 provision.go:84] configureAuth start
	I1004 04:23:33.889681   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.889992   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:33.892657   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.893063   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.893087   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.893310   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.895770   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.896126   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.896162   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.896328   66755 provision.go:143] copyHostCerts
	I1004 04:23:33.896397   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:23:33.896408   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:23:33.896472   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:23:33.896565   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:23:33.896573   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:23:33.896595   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:23:33.896652   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:23:33.896659   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:23:33.896678   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:23:33.896724   66755 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.embed-certs-934812 san=[127.0.0.1 192.168.61.74 embed-certs-934812 localhost minikube]
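
provision.go:117 above generates a per-machine server certificate whose SANs cover the guest's IP, its hostname, localhost and minikube. The sketch below shows how such a SAN-bearing certificate can be produced with Go's standard library; it is self-signed for brevity and purely illustrative, whereas the real flow signs against the ca.pem/ca-key.pem pair named in the log.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-934812"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration value seen later in the log
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs corresponding to san=[...] in the log line above.
		DNSNames:    []string{"embed-certs-934812", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.74")},
	}
	// Self-signed here for brevity; the logged flow signs with the shared minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
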
	I1004 04:23:33.997867   66755 provision.go:177] copyRemoteCerts
	I1004 04:23:33.997923   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:23:33.997950   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.001050   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.001422   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.001461   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.001733   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.001961   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.002125   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.002246   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.090823   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:23:34.116934   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1004 04:23:34.669084   67282 start.go:364] duration metric: took 2m46.052475725s to acquireMachinesLock for "old-k8s-version-420062"
	I1004 04:23:34.669158   67282 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:34.669168   67282 fix.go:54] fixHost starting: 
	I1004 04:23:34.669584   67282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:34.669640   67282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:34.686790   67282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I1004 04:23:34.687312   67282 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:34.687829   67282 main.go:141] libmachine: Using API Version  1
	I1004 04:23:34.687857   67282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:34.688238   67282 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:34.688415   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:34.688579   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetState
	I1004 04:23:34.690288   67282 fix.go:112] recreateIfNeeded on old-k8s-version-420062: state=Stopped err=<nil>
	I1004 04:23:34.690326   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	W1004 04:23:34.690467   67282 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:34.692283   67282 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-420062" ...
	I1004 04:23:34.143763   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 04:23:34.168897   66755 provision.go:87] duration metric: took 279.227966ms to configureAuth
	I1004 04:23:34.168929   66755 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:23:34.169096   66755 config.go:182] Loaded profile config "embed-certs-934812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:23:34.169168   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.171638   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.171952   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.171977   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.172178   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.172349   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.172503   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.172594   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.172717   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:34.172924   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:34.172943   66755 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:23:34.411661   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:23:34.411690   66755 machine.go:96] duration metric: took 909.61315ms to provisionDockerMachine
	I1004 04:23:34.411703   66755 start.go:293] postStartSetup for "embed-certs-934812" (driver="kvm2")
	I1004 04:23:34.411716   66755 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:23:34.411734   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.412070   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:23:34.412099   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.415246   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.415583   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.415643   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.415802   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.415997   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.416170   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.416322   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.507385   66755 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:23:34.511963   66755 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:23:34.511990   66755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:23:34.512064   66755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:23:34.512152   66755 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:23:34.512270   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:23:34.522375   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:34.547860   66755 start.go:296] duration metric: took 136.143527ms for postStartSetup
	I1004 04:23:34.547904   66755 fix.go:56] duration metric: took 18.578910472s for fixHost
	I1004 04:23:34.547931   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.550715   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.551031   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.551067   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.551194   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.551391   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.551568   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.551724   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.551903   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:34.552055   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:34.552064   66755 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:23:34.668944   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015814.641353752
	
	I1004 04:23:34.668966   66755 fix.go:216] guest clock: 1728015814.641353752
	I1004 04:23:34.668974   66755 fix.go:229] Guest: 2024-10-04 04:23:34.641353752 +0000 UTC Remote: 2024-10-04 04:23:34.547909289 +0000 UTC m=+265.449211021 (delta=93.444463ms)
	I1004 04:23:34.668993   66755 fix.go:200] guest clock delta is within tolerance: 93.444463ms
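
The fix.go lines above read the guest clock over SSH (date +%s.%N), compute the delta against the host, and accept it because it falls inside the tolerance. A tiny sketch of that comparison follows; the 2s tolerance is an assumed value for illustration, not necessarily the constant minikube uses.

package main

import (
	"fmt"
	"time"
)

// withinClockTolerance reports whether guest and host timestamps differ by
// less than tol, the way the "guest clock delta is within tolerance" line
// above decides whether the VM clock needs resyncing.
func withinClockTolerance(guest, host time.Time, tol time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta < tol
}

func main() {
	guest := time.Unix(0, 1728015814641353752) // 1728015814.641353752 from the log
	host := guest.Add(-93444463 * time.Nanosecond)
	fmt.Println(withinClockTolerance(guest, host, 2*time.Second)) // true
}
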
	I1004 04:23:34.668999   66755 start.go:83] releasing machines lock for "embed-certs-934812", held for 18.70003051s
	I1004 04:23:34.669024   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.669299   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:34.672346   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.672757   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.672796   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.672966   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673609   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673816   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673940   66755 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:23:34.673982   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.674020   66755 ssh_runner.go:195] Run: cat /version.json
	I1004 04:23:34.674043   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.676934   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677085   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677379   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.677406   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677449   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.677480   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677560   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.677677   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.677758   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.677811   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.677873   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.677928   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.677979   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.678022   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.761509   66755 ssh_runner.go:195] Run: systemctl --version
	I1004 04:23:34.784487   66755 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:23:34.934037   66755 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:23:34.942569   66755 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:23:34.942642   66755 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:23:34.960164   66755 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:23:34.960197   66755 start.go:495] detecting cgroup driver to use...
	I1004 04:23:34.960276   66755 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:23:34.979195   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:23:34.994660   66755 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:23:34.994747   66755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:23:35.011209   66755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:23:35.031746   66755 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:23:35.146164   66755 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:23:35.287092   66755 docker.go:233] disabling docker service ...
	I1004 04:23:35.287167   66755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:23:35.308007   66755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:23:35.323235   66755 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:23:35.473583   66755 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:23:35.610098   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:23:35.624276   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:23:35.643810   66755 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:23:35.643873   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.655804   66755 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:23:35.655875   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.668260   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.679770   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.692649   66755 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:23:35.704364   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.715539   66755 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.739272   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
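
crio.go:59 and crio.go:70 above rewrite /etc/crio/crio.conf.d/02-crio.conf on the guest through a series of sed invocations, pinning the pause image to registry.k8s.io/pause:3.10 and forcing the cgroupfs cgroup manager. The sketch below performs the same kind of line-oriented key replacement locally; it illustrates the edit only and is not minikube's implementation, which shells the sed commands out over SSH as logged.

package main

import (
	"fmt"
	"regexp"
)

// setCrioKey replaces any existing `key = ...` line in a crio drop-in with the
// quoted value, mirroring what the sed invocations above do on the guest.
func setCrioKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
}

func main() {
	conf := []byte("pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n")
	conf = setCrioKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setCrioKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(string(conf))
}
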
	I1004 04:23:35.754538   66755 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:23:35.766476   66755 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:23:35.766566   66755 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:23:35.781677   66755 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:23:35.792640   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:35.910787   66755 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:23:36.015877   66755 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:23:36.015948   66755 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:23:36.021573   66755 start.go:563] Will wait 60s for crictl version
	I1004 04:23:36.021642   66755 ssh_runner.go:195] Run: which crictl
	I1004 04:23:36.025605   66755 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:23:36.064644   66755 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:23:36.064714   66755 ssh_runner.go:195] Run: crio --version
	I1004 04:23:36.094751   66755 ssh_runner.go:195] Run: crio --version
	I1004 04:23:36.127213   66755 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:23:34.693590   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .Start
	I1004 04:23:34.693792   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring networks are active...
	I1004 04:23:34.694582   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring network default is active
	I1004 04:23:34.694917   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring network mk-old-k8s-version-420062 is active
	I1004 04:23:34.695322   67282 main.go:141] libmachine: (old-k8s-version-420062) Getting domain xml...
	I1004 04:23:34.696052   67282 main.go:141] libmachine: (old-k8s-version-420062) Creating domain...
	I1004 04:23:35.995511   67282 main.go:141] libmachine: (old-k8s-version-420062) Waiting to get IP...
	I1004 04:23:35.996465   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:35.996962   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:35.997031   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:35.996923   68093 retry.go:31] will retry after 296.620059ms: waiting for machine to come up
	I1004 04:23:36.295737   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:36.296226   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:36.296257   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:36.296182   68093 retry.go:31] will retry after 311.736827ms: waiting for machine to come up
	I1004 04:23:36.610158   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:36.610804   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:36.610829   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:36.610759   68093 retry.go:31] will retry after 440.646496ms: waiting for machine to come up
	I1004 04:23:37.053487   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:37.053956   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:37.053981   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:37.053923   68093 retry.go:31] will retry after 550.190101ms: waiting for machine to come up
	I1004 04:23:37.605404   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:37.605775   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:37.605815   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:37.605743   68093 retry.go:31] will retry after 721.648529ms: waiting for machine to come up
	I1004 04:23:38.328819   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:38.329323   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:38.329362   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:38.329281   68093 retry.go:31] will retry after 825.234448ms: waiting for machine to come up
	I1004 04:23:36.128549   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:36.131439   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:36.131827   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:36.131856   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:36.132054   66755 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1004 04:23:36.136650   66755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:23:36.149563   66755 kubeadm.go:883] updating cluster {Name:embed-certs-934812 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:23:36.149691   66755 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:23:36.149738   66755 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:36.188235   66755 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:23:36.188316   66755 ssh_runner.go:195] Run: which lz4
	I1004 04:23:36.192619   66755 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:23:36.196876   66755 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:23:36.196909   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 04:23:37.711672   66755 crio.go:462] duration metric: took 1.519102092s to copy over tarball
	I1004 04:23:37.711752   66755 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:23:39.155736   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:39.156199   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:39.156229   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:39.156150   68093 retry.go:31] will retry after 970.793402ms: waiting for machine to come up
	I1004 04:23:40.128963   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:40.129454   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:40.129507   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:40.129419   68093 retry.go:31] will retry after 1.460395601s: waiting for machine to come up
	I1004 04:23:41.592145   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:41.592653   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:41.592677   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:41.592600   68093 retry.go:31] will retry after 1.397092356s: waiting for machine to come up
	I1004 04:23:42.992176   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:42.992670   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:42.992724   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:42.992663   68093 retry.go:31] will retry after 1.560294099s: waiting for machine to come up
	I1004 04:23:39.864408   66755 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.152629063s)
	I1004 04:23:39.864437   66755 crio.go:469] duration metric: took 2.152732931s to extract the tarball
	I1004 04:23:39.864446   66755 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:23:39.902496   66755 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:39.956348   66755 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:23:39.956373   66755 cache_images.go:84] Images are preloaded, skipping loading
	I1004 04:23:39.956381   66755 kubeadm.go:934] updating node { 192.168.61.74 8443 v1.31.1 crio true true} ...
	I1004 04:23:39.956509   66755 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-934812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
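
kubeadm.go:946 above prints the systemd drop-in minikube writes for the kubelet, with ExecStart rebuilt from the node's name and IP. A rough sketch of assembling such a unit from those values follows; the helper name and the trimmed flag set are assumptions for illustration, not minikube's actual generator.

package main

import (
	"fmt"
	"strings"
)

// kubeletUnit renders a minimal systemd drop-in like the one logged above.
// binDir, nodeName and nodeIP stand in for values derived from the cluster
// config; the flag set is trimmed for illustration.
func kubeletUnit(binDir, nodeName, nodeIP string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return fmt.Sprintf("[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=%s/kubelet %s\n\n[Install]\n",
		binDir, strings.Join(flags, " "))
}

func main() {
	fmt.Print(kubeletUnit("/var/lib/minikube/binaries/v1.31.1", "embed-certs-934812", "192.168.61.74"))
}
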
	I1004 04:23:39.956572   66755 ssh_runner.go:195] Run: crio config
	I1004 04:23:40.014396   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:23:40.014423   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:23:40.014436   66755 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:23:40.014470   66755 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.74 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-934812 NodeName:embed-certs-934812 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:23:40.014642   66755 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-934812"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
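	The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a minimal sketch, the Go snippet below splits such a stream and prints each document's kind; it assumes the third-party gopkg.in/yaml.v3 package and the file path shown in the log, and is not part of minikube.

// Sketch only: inspect each YAML document in a generated kubeadm config.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents in the stream
			}
			log.Fatal(err)
		}
		// Each document declares its own apiVersion and kind,
		// e.g. kubeadm.k8s.io/v1beta3 InitConfiguration.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}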
	
	I1004 04:23:40.014728   66755 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:23:40.025328   66755 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:23:40.025441   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:23:40.035733   66755 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1004 04:23:40.057427   66755 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:23:40.078636   66755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1004 04:23:40.100583   66755 ssh_runner.go:195] Run: grep 192.168.61.74	control-plane.minikube.internal$ /etc/hosts
	I1004 04:23:40.104780   66755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.74	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
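	The shell one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP: it drops any existing entry for that name and appends the current mapping. A small Go sketch of the same idea follows (illustrative only; actually writing /etc/hosts requires root, and the IP and hostname are taken from the log).

// Sketch: remove any stale control-plane.minikube.internal entry, append the current one.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entryHost = "control-plane.minikube.internal"
	const nodeIP = "192.168.61.74" // value taken from the log above

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any line that already maps the control-plane name.
		if strings.HasSuffix(line, "\t"+entryHost) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, nodeIP+"\t"+entryHost)

	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile(hostsPath, []byte(out), 0644); err != nil {
		log.Fatal(err) // writing /etc/hosts needs root
	}
	fmt.Print(out)
}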
	I1004 04:23:40.118484   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:40.245425   66755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:23:40.268739   66755 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812 for IP: 192.168.61.74
	I1004 04:23:40.268764   66755 certs.go:194] generating shared ca certs ...
	I1004 04:23:40.268792   66755 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:23:40.268962   66755 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:23:40.269022   66755 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:23:40.269035   66755 certs.go:256] generating profile certs ...
	I1004 04:23:40.269145   66755 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/client.key
	I1004 04:23:40.269226   66755 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.key.0181efa9
	I1004 04:23:40.269290   66755 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.key
	I1004 04:23:40.269436   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:23:40.269483   66755 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:23:40.269497   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:23:40.269535   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:23:40.269575   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:23:40.269607   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:23:40.269658   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:40.270269   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:23:40.316579   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:23:40.352928   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:23:40.383124   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:23:40.410211   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1004 04:23:40.442388   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 04:23:40.473580   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:23:40.501589   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:23:40.527299   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:23:40.551994   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:23:40.576644   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:23:40.601518   66755 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:23:40.620092   66755 ssh_runner.go:195] Run: openssl version
	I1004 04:23:40.626451   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:23:40.637754   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.642413   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.642472   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.648449   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:23:40.659371   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:23:40.670276   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.674793   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.674844   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.680550   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:23:40.691439   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:23:40.702237   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.706876   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.706937   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.712970   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:23:40.724505   66755 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:23:40.729486   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:23:40.735720   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:23:40.741680   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:23:40.747975   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:23:40.754056   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:23:40.760235   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
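	Each control-plane certificate above is checked with "openssl x509 -checkend 86400", i.e. "does this certificate expire within the next 24 hours". A rough Go equivalent, assuming a PEM-encoded certificate at one of the paths shown in the log, could look like this (illustrative only, not minikube's implementation):

// Sketch: report whether a PEM certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; any PEM certificate works.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}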
	I1004 04:23:40.766463   66755 kubeadm.go:392] StartCluster: {Name:embed-certs-934812 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:23:40.766576   66755 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:23:40.766635   66755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:23:40.805927   66755 cri.go:89] found id: ""
	I1004 04:23:40.805995   66755 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:23:40.816693   66755 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:23:40.816717   66755 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:23:40.816770   66755 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:23:40.827024   66755 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:23:40.828056   66755 kubeconfig.go:125] found "embed-certs-934812" server: "https://192.168.61.74:8443"
	I1004 04:23:40.830076   66755 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:23:40.840637   66755 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.74
	I1004 04:23:40.840673   66755 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:23:40.840686   66755 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:23:40.840741   66755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:23:40.877659   66755 cri.go:89] found id: ""
	I1004 04:23:40.877737   66755 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:23:40.894712   66755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:23:40.904202   66755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:23:40.904224   66755 kubeadm.go:157] found existing configuration files:
	
	I1004 04:23:40.904290   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:23:40.913941   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:23:40.914003   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:23:40.924730   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:23:40.934706   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:23:40.934784   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:23:40.945008   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:23:40.954864   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:23:40.954949   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:23:40.965357   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:23:40.975380   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:23:40.975459   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:23:40.986157   66755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:23:41.001260   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:41.129150   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:41.839910   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:42.059079   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:42.132717   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:42.204227   66755 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:23:42.204389   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:42.704572   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.205099   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.704555   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.720983   66755 api_server.go:72] duration metric: took 1.516755506s to wait for apiserver process to appear ...
	I1004 04:23:43.721020   66755 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:23:43.721043   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.578729   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:23:46.578764   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:23:46.578780   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.611578   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:23:46.611609   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:23:46.721894   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.728611   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:46.728649   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:47.221889   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:47.229348   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:47.229382   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:47.721971   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:47.741433   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:47.741460   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:48.222154   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:48.226802   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 200:
	ok
	I1004 04:23:48.233611   66755 api_server.go:141] control plane version: v1.31.1
	I1004 04:23:48.233645   66755 api_server.go:131] duration metric: took 4.512616682s to wait for apiserver health ...
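	The /healthz polling above first returns 403 (the probe is anonymous and the RBAC bootstrap roles are not yet in place), then 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally 200. The poll loop below is a minimal sketch of that pattern, not minikube's api_server.go; the interval, timeout, and skipping of TLS verification are assumptions for illustration.

// Sketch: poll the apiserver /healthz endpoint until it reports 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents the cluster CA's cert; verification is skipped
		// here only because this is a local health-probe sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.74:8443/healthz" // address taken from the log above
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok:", string(body))
				return
			}
			// 403 (anonymous user) and 500 (post-start hooks still running)
			// are expected while the control plane comes up.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("healthz never became ready before the deadline")
}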
	I1004 04:23:48.233655   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:23:48.233662   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:23:48.235421   66755 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:23:44.555619   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:44.556128   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:44.556154   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:44.556061   68093 retry.go:31] will retry after 2.564674777s: waiting for machine to come up
	I1004 04:23:47.123819   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:47.124235   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:47.124263   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:47.124181   68093 retry.go:31] will retry after 2.408805702s: waiting for machine to come up
	I1004 04:23:48.236675   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:23:48.248304   66755 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:23:48.273584   66755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:23:48.288132   66755 system_pods.go:59] 8 kube-system pods found
	I1004 04:23:48.288174   66755 system_pods.go:61] "coredns-7c65d6cfc9-z7pqn" [f206a8bf-5c18-49f2-9fae-a48a38d608a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:23:48.288208   66755 system_pods.go:61] "etcd-embed-certs-934812" [07a8f2db-6d47-469b-b0e4-749d1e106522] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:23:48.288218   66755 system_pods.go:61] "kube-apiserver-embed-certs-934812" [f36bc69a-a04e-40c2-8f78-a983ddbf28aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:23:48.288227   66755 system_pods.go:61] "kube-controller-manager-embed-certs-934812" [06d73118-fa31-4c98-b1e8-099611718b19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:23:48.288232   66755 system_pods.go:61] "kube-proxy-9qpgb" [6d833f16-4b8e-4409-99b6-214babe699c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 04:23:48.288238   66755 system_pods.go:61] "kube-scheduler-embed-certs-934812" [d076a245-49b6-4d8b-949a-2b559cd1d4d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:23:48.288243   66755 system_pods.go:61] "metrics-server-6867b74b74-d5b6b" [f4ec5d83-22a7-49e5-97e9-3519a29484fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:23:48.288250   66755 system_pods.go:61] "storage-provisioner" [2e76a95b-d6e2-4c1d-b954-3da8c2670a4a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 04:23:48.288259   66755 system_pods.go:74] duration metric: took 14.644463ms to wait for pod list to return data ...
	I1004 04:23:48.288265   66755 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:23:48.293121   66755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:23:48.293153   66755 node_conditions.go:123] node cpu capacity is 2
	I1004 04:23:48.293166   66755 node_conditions.go:105] duration metric: took 4.895489ms to run NodePressure ...
	I1004 04:23:48.293184   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:48.633398   66755 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:23:48.639243   66755 kubeadm.go:739] kubelet initialised
	I1004 04:23:48.639282   66755 kubeadm.go:740] duration metric: took 5.842777ms waiting for restarted kubelet to initialise ...
	I1004 04:23:48.639293   66755 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:23:48.650460   66755 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace to be "Ready" ...
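	The pod_ready wait above polls each system-critical pod until its Ready condition is True, for up to 4m0s. The client-go sketch below shows that kind of check under stated assumptions: the kubeconfig path, namespace, and pod name are taken from the log, and the code is an illustration rather than minikube's pod_ready implementation.

// Sketch: wait for a pod's Ready condition using client-go.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s wait in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-z7pqn", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("pod did not become Ready in time")
}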
	I1004 04:23:49.535979   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:49.536361   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:49.536388   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:49.536332   68093 retry.go:31] will retry after 4.242056709s: waiting for machine to come up
	I1004 04:23:50.657094   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:52.657717   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:55.089234   67541 start.go:364] duration metric: took 2m31.706739813s to acquireMachinesLock for "default-k8s-diff-port-281471"
	I1004 04:23:55.089300   67541 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:55.089311   67541 fix.go:54] fixHost starting: 
	I1004 04:23:55.089673   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:55.089718   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:55.110154   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
	I1004 04:23:55.110566   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:55.111001   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:23:55.111025   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:55.111417   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:55.111627   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:23:55.111794   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:23:55.113328   67541 fix.go:112] recreateIfNeeded on default-k8s-diff-port-281471: state=Stopped err=<nil>
	I1004 04:23:55.113356   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	W1004 04:23:55.113537   67541 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:55.115190   67541 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-281471" ...
	I1004 04:23:53.783128   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.783631   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has current primary IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.783669   67282 main.go:141] libmachine: (old-k8s-version-420062) Found IP for machine: 192.168.50.146
	I1004 04:23:53.783684   67282 main.go:141] libmachine: (old-k8s-version-420062) Reserving static IP address...
	I1004 04:23:53.784173   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "old-k8s-version-420062", mac: "52:54:00:fb:e4:4f", ip: "192.168.50.146"} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.784206   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | skip adding static IP to network mk-old-k8s-version-420062 - found existing host DHCP lease matching {name: "old-k8s-version-420062", mac: "52:54:00:fb:e4:4f", ip: "192.168.50.146"}
	I1004 04:23:53.784222   67282 main.go:141] libmachine: (old-k8s-version-420062) Reserved static IP address: 192.168.50.146
	I1004 04:23:53.784238   67282 main.go:141] libmachine: (old-k8s-version-420062) Waiting for SSH to be available...
	I1004 04:23:53.784250   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Getting to WaitForSSH function...
	I1004 04:23:53.786551   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.786985   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.787016   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.787207   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using SSH client type: external
	I1004 04:23:53.787244   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa (-rw-------)
	I1004 04:23:53.787285   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:23:53.787301   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | About to run SSH command:
	I1004 04:23:53.787315   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | exit 0
	I1004 04:23:53.916121   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | SSH cmd err, output: <nil>: 
	I1004 04:23:53.916487   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetConfigRaw
	I1004 04:23:53.917200   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:53.919846   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.920295   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.920323   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.920641   67282 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/config.json ...
	I1004 04:23:53.920902   67282 machine.go:93] provisionDockerMachine start ...
	I1004 04:23:53.920930   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:53.921137   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:53.923647   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.924000   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.924039   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.924198   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:53.924375   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:53.924508   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:53.924659   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:53.924796   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:53.925024   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:53.925036   67282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:23:54.044565   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:23:54.044595   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.044820   67282 buildroot.go:166] provisioning hostname "old-k8s-version-420062"
	I1004 04:23:54.044837   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.045006   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.047682   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.048032   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.048060   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.048186   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.048376   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.048525   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.048694   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.048853   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.049077   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.049098   67282 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-420062 && echo "old-k8s-version-420062" | sudo tee /etc/hostname
	I1004 04:23:54.183772   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-420062
	
	I1004 04:23:54.183835   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.186969   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.187333   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.187368   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.187754   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.188000   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.188177   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.188334   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.188559   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.188778   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.188803   67282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-420062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-420062/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-420062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:23:54.313827   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:54.313852   67282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:23:54.313896   67282 buildroot.go:174] setting up certificates
	I1004 04:23:54.313913   67282 provision.go:84] configureAuth start
	I1004 04:23:54.313925   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.314208   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:54.317028   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.317378   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.317408   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.317549   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.320292   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.320690   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.320718   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.320874   67282 provision.go:143] copyHostCerts
	I1004 04:23:54.320945   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:23:54.320957   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:23:54.321020   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:23:54.321144   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:23:54.321157   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:23:54.321184   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:23:54.321269   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:23:54.321279   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:23:54.321306   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:23:54.321378   67282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-420062 san=[127.0.0.1 192.168.50.146 localhost minikube old-k8s-version-420062]
	I1004 04:23:54.395370   67282 provision.go:177] copyRemoteCerts
	I1004 04:23:54.395422   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:23:54.395452   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.398647   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.399153   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.399194   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.399392   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.399582   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.399852   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.399991   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:54.491055   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:23:54.523206   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1004 04:23:54.549843   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:23:54.580403   67282 provision.go:87] duration metric: took 266.475364ms to configureAuth
	I1004 04:23:54.580438   67282 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:23:54.580645   67282 config.go:182] Loaded profile config "old-k8s-version-420062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1004 04:23:54.580736   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.583200   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.583489   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.583522   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.583672   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.583871   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.584066   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.584195   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.584402   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.584567   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.584582   67282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:23:54.835402   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:23:54.835436   67282 machine.go:96] duration metric: took 914.509404ms to provisionDockerMachine
	I1004 04:23:54.835451   67282 start.go:293] postStartSetup for "old-k8s-version-420062" (driver="kvm2")
	I1004 04:23:54.835466   67282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:23:54.835491   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:54.835870   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:23:54.835902   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.838257   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.838645   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.838670   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.838810   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.838972   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.839117   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.839247   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:54.927041   67282 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:23:54.931330   67282 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:23:54.931357   67282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:23:54.931424   67282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:23:54.931538   67282 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:23:54.931658   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:23:54.941402   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:54.967433   67282 start.go:296] duration metric: took 131.968424ms for postStartSetup
	I1004 04:23:54.967495   67282 fix.go:56] duration metric: took 20.29830643s for fixHost
	I1004 04:23:54.967523   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.970138   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.970485   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.970502   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.970802   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.971000   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.971164   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.971330   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.971560   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.971739   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.971751   67282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:23:55.089031   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015835.056238818
	
	I1004 04:23:55.089054   67282 fix.go:216] guest clock: 1728015835.056238818
	I1004 04:23:55.089063   67282 fix.go:229] Guest: 2024-10-04 04:23:55.056238818 +0000 UTC Remote: 2024-10-04 04:23:54.967501465 +0000 UTC m=+186.499621032 (delta=88.737353ms)
	I1004 04:23:55.089086   67282 fix.go:200] guest clock delta is within tolerance: 88.737353ms
	I1004 04:23:55.089093   67282 start.go:83] releasing machines lock for "old-k8s-version-420062", held for 20.419961099s
	I1004 04:23:55.089124   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.089472   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:55.092047   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.092519   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.092552   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.092784   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093364   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093566   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093670   67282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:23:55.093715   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:55.093808   67282 ssh_runner.go:195] Run: cat /version.json
	I1004 04:23:55.093834   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:55.096451   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.096829   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.096862   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.096881   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.097173   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:55.097364   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:55.097446   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.097474   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.097548   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:55.097685   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:55.097816   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:55.097823   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:55.097953   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:55.098106   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:55.207195   67282 ssh_runner.go:195] Run: systemctl --version
	I1004 04:23:55.214080   67282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:23:55.369882   67282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:23:55.376111   67282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:23:55.376171   67282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:23:55.393916   67282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:23:55.393945   67282 start.go:495] detecting cgroup driver to use...
	I1004 04:23:55.394015   67282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:23:55.411330   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:23:55.427665   67282 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:23:55.427734   67282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:23:55.445180   67282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:23:55.465131   67282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:23:55.596260   67282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:23:55.781647   67282 docker.go:233] disabling docker service ...
	I1004 04:23:55.781711   67282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:23:55.801252   67282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:23:55.817688   67282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:23:55.952563   67282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:23:56.081096   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:23:56.096194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:23:56.116859   67282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1004 04:23:56.116924   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.129060   67282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:23:56.129133   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.141246   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.158759   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.172580   67282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:23:56.192027   67282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:23:56.206698   67282 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:23:56.206757   67282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:23:56.223074   67282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:23:56.241061   67282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:56.365616   67282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:23:56.474445   67282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:23:56.474519   67282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:23:56.480077   67282 start.go:563] Will wait 60s for crictl version
	I1004 04:23:56.480133   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:23:56.485207   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:23:56.537710   67282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:23:56.537802   67282 ssh_runner.go:195] Run: crio --version
	I1004 04:23:56.571679   67282 ssh_runner.go:195] Run: crio --version
	I1004 04:23:56.605639   67282 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1004 04:23:55.116525   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Start
	I1004 04:23:55.116723   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring networks are active...
	I1004 04:23:55.117665   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring network default is active
	I1004 04:23:55.118079   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring network mk-default-k8s-diff-port-281471 is active
	I1004 04:23:55.118565   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Getting domain xml...
	I1004 04:23:55.119417   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Creating domain...
	I1004 04:23:56.429715   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting to get IP...
	I1004 04:23:56.430752   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.431261   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.431353   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.431245   68239 retry.go:31] will retry after 200.843618ms: waiting for machine to come up
	I1004 04:23:56.633542   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.633974   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.634003   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.633923   68239 retry.go:31] will retry after 291.906374ms: waiting for machine to come up
	I1004 04:23:56.927325   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.927847   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.927880   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.927813   68239 retry.go:31] will retry after 374.509137ms: waiting for machine to come up
	I1004 04:23:57.304251   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.304713   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.304738   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:57.304671   68239 retry.go:31] will retry after 583.046975ms: waiting for machine to come up
	I1004 04:23:57.889410   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.889837   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.889868   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:57.889795   68239 retry.go:31] will retry after 549.483036ms: waiting for machine to come up
	I1004 04:23:56.606945   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:56.610421   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:56.610952   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:56.610976   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:56.611373   67282 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1004 04:23:56.615872   67282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:23:56.629783   67282 kubeadm.go:883] updating cluster {Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:23:56.629932   67282 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1004 04:23:56.629983   67282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:56.690260   67282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:23:56.690343   67282 ssh_runner.go:195] Run: which lz4
	I1004 04:23:56.695808   67282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:23:56.701593   67282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:23:56.701623   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1004 04:23:54.156612   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"True"
	I1004 04:23:54.156637   66755 pod_ready.go:82] duration metric: took 5.506141622s for pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace to be "Ready" ...
	I1004 04:23:54.156646   66755 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:23:56.164534   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:58.166994   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:58.440643   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:58.441080   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:58.441109   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:58.441034   68239 retry.go:31] will retry after 585.437747ms: waiting for machine to come up
	I1004 04:23:59.027951   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.028414   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.028441   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:59.028369   68239 retry.go:31] will retry after 773.32668ms: waiting for machine to come up
	I1004 04:23:59.803329   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.803752   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.803793   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:59.803722   68239 retry.go:31] will retry after 936.396482ms: waiting for machine to come up
	I1004 04:24:00.741805   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:00.742328   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:00.742372   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:00.742262   68239 retry.go:31] will retry after 1.294836266s: waiting for machine to come up
	I1004 04:24:02.038222   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:02.038750   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:02.038785   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:02.038699   68239 retry.go:31] will retry after 2.282660025s: waiting for machine to come up
	I1004 04:23:58.525796   67282 crio.go:462] duration metric: took 1.830039762s to copy over tarball
	I1004 04:23:58.525868   67282 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:24:01.514552   67282 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.98865618s)
	I1004 04:24:01.514585   67282 crio.go:469] duration metric: took 2.988759159s to extract the tarball
	I1004 04:24:01.514595   67282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:24:01.562130   67282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:01.598856   67282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:24:01.598882   67282 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 04:24:01.598960   67282 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:01.599035   67282 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.599047   67282 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.599048   67282 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1004 04:24:01.599020   67282 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.599025   67282 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.598967   67282 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.598967   67282 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.600760   67282 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.600772   67282 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1004 04:24:01.600767   67282 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:01.600791   67282 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.600802   67282 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.600804   67282 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.600807   67282 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.600840   67282 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.837527   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.877366   67282 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1004 04:24:01.877413   67282 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.877464   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:01.882328   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.914693   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.934055   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.941737   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.943929   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.944540   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.948337   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.970977   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.995537   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1004 04:24:02.127073   67282 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1004 04:24:02.127097   67282 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1004 04:24:02.127121   67282 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.127121   67282 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.127156   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.127159   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128471   67282 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1004 04:24:02.128532   67282 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.128535   67282 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1004 04:24:02.128560   67282 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.128571   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128595   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128598   67282 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1004 04:24:02.128627   67282 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.128669   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128730   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1004 04:24:02.128761   67282 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1004 04:24:02.128783   67282 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1004 04:24:02.128815   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.133675   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.133724   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.141911   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.141950   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.141989   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.142044   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.263733   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.263744   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.263798   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.265990   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.297523   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.297566   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.379282   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.379318   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.379331   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.417271   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.454521   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.454559   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.496644   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1004 04:24:02.533632   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1004 04:24:02.533690   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1004 04:24:02.533750   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1004 04:24:02.568138   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1004 04:24:02.568153   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1004 04:24:02.911933   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:03.055844   67282 cache_images.go:92] duration metric: took 1.456943316s to LoadCachedImages
	W1004 04:24:03.055959   67282 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1004 04:24:03.055976   67282 kubeadm.go:934] updating node { 192.168.50.146 8443 v1.20.0 crio true true} ...
	I1004 04:24:03.056087   67282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-420062 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:03.056162   67282 ssh_runner.go:195] Run: crio config
	I1004 04:24:03.103752   67282 cni.go:84] Creating CNI manager for ""
	I1004 04:24:03.103792   67282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:03.103805   67282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:03.103826   67282 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.146 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-420062 NodeName:old-k8s-version-420062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1004 04:24:03.103952   67282 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-420062"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:03.104008   67282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1004 04:24:03.114316   67282 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:03.114372   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:03.124059   67282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1004 04:24:03.143310   67282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:03.161143   67282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1004 04:24:03.178444   67282 ssh_runner.go:195] Run: grep 192.168.50.146	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:03.182235   67282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:03.195103   67282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:03.317820   67282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:03.334820   67282 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062 for IP: 192.168.50.146
	I1004 04:24:03.334840   67282 certs.go:194] generating shared ca certs ...
	I1004 04:24:03.334855   67282 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:03.335008   67282 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:03.335049   67282 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:03.335059   67282 certs.go:256] generating profile certs ...
	I1004 04:24:03.335156   67282 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.key
	I1004 04:24:03.335212   67282 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key.c1f9ed6b
	I1004 04:24:03.335260   67282 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key
	I1004 04:24:03.335368   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:03.335394   67282 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:03.335401   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:03.335426   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:03.335451   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:03.335476   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:03.335518   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:03.336260   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:03.373985   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:03.408150   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:03.444219   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:03.493160   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1004 04:24:00.665171   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:02.815874   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:04.022715   66755 pod_ready.go:93] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.022744   66755 pod_ready.go:82] duration metric: took 9.866089641s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.022756   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.028094   66755 pod_ready.go:93] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.028115   66755 pod_ready.go:82] duration metric: took 5.350911ms for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.028123   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.033106   66755 pod_ready.go:93] pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.033124   66755 pod_ready.go:82] duration metric: took 4.995208ms for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.033132   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9qpgb" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.037388   66755 pod_ready.go:93] pod "kube-proxy-9qpgb" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.037409   66755 pod_ready.go:82] duration metric: took 4.270278ms for pod "kube-proxy-9qpgb" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.037420   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.042717   66755 pod_ready.go:93] pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.042737   66755 pod_ready.go:82] duration metric: took 5.30887ms for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.042747   66755 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.324259   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:04.324749   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:04.324811   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:04.324726   68239 retry.go:31] will retry after 2.070089599s: waiting for machine to come up
	I1004 04:24:06.396547   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:06.396991   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:06.397015   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:06.396944   68239 retry.go:31] will retry after 3.403718824s: waiting for machine to come up
	I1004 04:24:03.533084   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 04:24:03.565405   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:03.613938   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:24:03.642711   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:03.674784   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:03.706968   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:03.731329   67282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:03.749003   67282 ssh_runner.go:195] Run: openssl version
	I1004 04:24:03.755219   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:03.766499   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.771322   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.771413   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.778185   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:03.790581   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:03.802556   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.807312   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.807373   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.813595   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:03.825043   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:03.835389   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.840004   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.840051   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.847540   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:03.862303   67282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:03.868029   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:03.874811   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:03.880797   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:03.886622   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:03.892273   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:03.898129   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1004 04:24:03.905775   67282 kubeadm.go:392] StartCluster: {Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:03.905852   67282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:03.905890   67282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:03.954627   67282 cri.go:89] found id: ""
	I1004 04:24:03.954702   67282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:03.965146   67282 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:03.965170   67282 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:03.965236   67282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:03.975404   67282 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:03.976362   67282 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-420062" does not appear in /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:24:03.976990   67282 kubeconfig.go:62] /home/jenkins/minikube-integration/19546-9647/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-420062" cluster setting kubeconfig missing "old-k8s-version-420062" context setting]
	I1004 04:24:03.977906   67282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:03.979485   67282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:03.989487   67282 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.146
	I1004 04:24:03.989517   67282 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:03.989529   67282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:03.989577   67282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:04.031536   67282 cri.go:89] found id: ""
	I1004 04:24:04.031607   67282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:04.048652   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:04.057813   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:04.057830   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:04.057867   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:24:04.066213   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:04.066252   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:04.074904   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:24:04.083485   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:04.083522   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:04.092314   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:24:04.100528   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:04.100572   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:04.109232   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:24:04.118051   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:04.118091   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:24:04.127430   67282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:04.137949   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:04.272627   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:04.940435   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.181288   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.268873   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.373549   67282 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:05.373653   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:05.874543   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.374154   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.874343   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:07.374744   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:07.874734   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:08.374255   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.050700   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:08.548473   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:09.802504   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:09.802912   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:09.802937   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:09.802870   68239 retry.go:31] will retry after 3.430575602s: waiting for machine to come up
	I1004 04:24:13.236792   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.237230   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Found IP for machine: 192.168.39.201
	I1004 04:24:13.237251   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Reserving static IP address...
	I1004 04:24:13.237268   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has current primary IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.237712   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-281471", mac: "52:54:00:cd:36:92", ip: "192.168.39.201"} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.237745   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Reserved static IP address: 192.168.39.201
	I1004 04:24:13.237765   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | skip adding static IP to network mk-default-k8s-diff-port-281471 - found existing host DHCP lease matching {name: "default-k8s-diff-port-281471", mac: "52:54:00:cd:36:92", ip: "192.168.39.201"}
	I1004 04:24:13.237786   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Getting to WaitForSSH function...
	I1004 04:24:13.237805   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for SSH to be available...
	I1004 04:24:13.240068   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.240354   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.240384   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.240514   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Using SSH client type: external
	I1004 04:24:13.240540   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa (-rw-------)
	I1004 04:24:13.240577   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:24:13.240594   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | About to run SSH command:
	I1004 04:24:13.240608   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | exit 0
	I1004 04:24:08.874627   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:09.374627   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:09.874278   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.374675   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.873949   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:11.373966   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:11.873775   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:12.373874   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:12.874010   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:13.374575   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.550171   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:13.049596   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:14.741098   66293 start.go:364] duration metric: took 53.770546651s to acquireMachinesLock for "no-preload-658545"
	I1004 04:24:14.741156   66293 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:24:14.741164   66293 fix.go:54] fixHost starting: 
	I1004 04:24:14.741565   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:14.741595   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:14.758364   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37947
	I1004 04:24:14.758823   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:14.759356   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:24:14.759383   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:14.759700   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:14.759895   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:14.760077   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:24:14.761849   66293 fix.go:112] recreateIfNeeded on no-preload-658545: state=Stopped err=<nil>
	I1004 04:24:14.761873   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	W1004 04:24:14.762037   66293 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:24:14.764123   66293 out.go:177] * Restarting existing kvm2 VM for "no-preload-658545" ...
	I1004 04:24:13.371830   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | SSH cmd err, output: <nil>: 
	I1004 04:24:13.372219   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetConfigRaw
	I1004 04:24:13.372817   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:13.375676   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.376080   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.376116   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.376393   67541 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/config.json ...
	I1004 04:24:13.376616   67541 machine.go:93] provisionDockerMachine start ...
	I1004 04:24:13.376638   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:13.376845   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.379413   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.379847   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.379908   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.380015   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.380204   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.380360   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.380493   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.380657   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.380913   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.380988   67541 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:24:13.492488   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:24:13.492528   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.492749   67541 buildroot.go:166] provisioning hostname "default-k8s-diff-port-281471"
	I1004 04:24:13.492768   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.492928   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.495691   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.496003   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.496031   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.496160   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.496368   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.496530   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.496651   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.496785   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.497017   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.497034   67541 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-281471 && echo "default-k8s-diff-port-281471" | sudo tee /etc/hostname
	I1004 04:24:13.627336   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-281471
	
	I1004 04:24:13.627364   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.630757   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.631162   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.631199   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.631486   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.631701   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.631874   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.632018   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.632216   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.632431   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.632457   67541 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-281471' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-281471/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-281471' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:24:13.758386   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:24:13.758413   67541 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:24:13.758462   67541 buildroot.go:174] setting up certificates
	I1004 04:24:13.758472   67541 provision.go:84] configureAuth start
	I1004 04:24:13.758484   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.758740   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:13.761590   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.761899   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.761939   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.762068   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.764293   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.764644   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.764672   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.764811   67541 provision.go:143] copyHostCerts
	I1004 04:24:13.764869   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:24:13.764880   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:24:13.764936   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:24:13.765046   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:24:13.765055   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:24:13.765075   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:24:13.765127   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:24:13.765135   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:24:13.765160   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:24:13.765235   67541 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-281471 san=[127.0.0.1 192.168.39.201 default-k8s-diff-port-281471 localhost minikube]
	I1004 04:24:14.075640   67541 provision.go:177] copyRemoteCerts
	I1004 04:24:14.075698   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:24:14.075722   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.078293   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.078658   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.078689   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.078827   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.079048   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.079213   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.079348   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.167232   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:24:14.193065   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1004 04:24:14.218112   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:24:14.243281   67541 provision.go:87] duration metric: took 484.783764ms to configureAuth
	I1004 04:24:14.243310   67541 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:24:14.243506   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:14.243593   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.246497   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.246837   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.246885   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.247019   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.247211   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.247384   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.247551   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.247719   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:14.247909   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:14.247923   67541 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:24:14.487651   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:24:14.487675   67541 machine.go:96] duration metric: took 1.11104473s to provisionDockerMachine
	I1004 04:24:14.487686   67541 start.go:293] postStartSetup for "default-k8s-diff-port-281471" (driver="kvm2")
	I1004 04:24:14.487696   67541 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:24:14.487733   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.488084   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:24:14.488114   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.490844   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.491198   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.491229   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.491372   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.491562   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.491700   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.491815   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.579398   67541 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:24:14.584068   67541 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:24:14.584098   67541 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:24:14.584179   67541 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:24:14.584274   67541 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:24:14.584379   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:24:14.594853   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:14.621833   67541 start.go:296] duration metric: took 134.135256ms for postStartSetup
	I1004 04:24:14.621874   67541 fix.go:56] duration metric: took 19.532563115s for fixHost
	I1004 04:24:14.621895   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.625077   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.625418   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.625443   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.625678   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.625900   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.626059   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.626205   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.626373   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:14.626589   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:14.626603   67541 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:24:14.740932   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015854.697826512
	
	I1004 04:24:14.740950   67541 fix.go:216] guest clock: 1728015854.697826512
	I1004 04:24:14.740957   67541 fix.go:229] Guest: 2024-10-04 04:24:14.697826512 +0000 UTC Remote: 2024-10-04 04:24:14.621877739 +0000 UTC m=+171.379203860 (delta=75.948773ms)
	I1004 04:24:14.741000   67541 fix.go:200] guest clock delta is within tolerance: 75.948773ms
	I1004 04:24:14.741007   67541 start.go:83] releasing machines lock for "default-k8s-diff-port-281471", held for 19.651737082s
	I1004 04:24:14.741031   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.741291   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:14.744142   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.744498   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.744518   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.744720   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745368   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745559   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745665   67541 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:24:14.745706   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.745802   67541 ssh_runner.go:195] Run: cat /version.json
	I1004 04:24:14.745843   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.748443   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748779   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.748813   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748838   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748927   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.749064   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.749245   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.749267   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.749283   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.749441   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.749481   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.749589   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.749725   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.749856   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.833632   67541 ssh_runner.go:195] Run: systemctl --version
	I1004 04:24:14.863812   67541 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:24:15.016823   67541 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:24:15.023613   67541 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:24:15.023696   67541 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:24:15.042546   67541 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:24:15.042576   67541 start.go:495] detecting cgroup driver to use...
	I1004 04:24:15.042645   67541 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:24:15.060267   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:24:15.076088   67541 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:24:15.076155   67541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:24:15.091741   67541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:24:15.107153   67541 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:24:15.230591   67541 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:24:15.381704   67541 docker.go:233] disabling docker service ...
	I1004 04:24:15.381776   67541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:24:15.397616   67541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:24:15.412350   67541 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:24:15.569525   67541 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:24:15.690120   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:24:15.705348   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:24:15.728253   67541 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:24:15.728334   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.739875   67541 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:24:15.739951   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.751997   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.765898   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.777917   67541 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:24:15.791235   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.802390   67541 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.825385   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.837278   67541 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:24:15.848791   67541 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:24:15.848864   67541 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:24:15.870774   67541 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:24:15.883544   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:15.997406   67541 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:24:16.095391   67541 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:24:16.095508   67541 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:24:16.102427   67541 start.go:563] Will wait 60s for crictl version
	I1004 04:24:16.102510   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:24:16.106958   67541 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:24:16.150721   67541 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:24:16.150824   67541 ssh_runner.go:195] Run: crio --version
	I1004 04:24:16.181714   67541 ssh_runner.go:195] Run: crio --version
	I1004 04:24:16.214202   67541 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:24:16.215583   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:16.218418   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:16.218800   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:16.218831   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:16.219002   67541 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 04:24:16.223382   67541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:16.236443   67541 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:24:16.236565   67541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:24:16.236652   67541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:16.279095   67541 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:24:16.279158   67541 ssh_runner.go:195] Run: which lz4
	I1004 04:24:16.283684   67541 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:24:16.288436   67541 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:24:16.288472   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 04:24:17.853549   67541 crio.go:462] duration metric: took 1.569889689s to copy over tarball
	I1004 04:24:17.853631   67541 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:24:14.765651   66293 main.go:141] libmachine: (no-preload-658545) Calling .Start
	I1004 04:24:14.765886   66293 main.go:141] libmachine: (no-preload-658545) Ensuring networks are active...
	I1004 04:24:14.766761   66293 main.go:141] libmachine: (no-preload-658545) Ensuring network default is active
	I1004 04:24:14.767179   66293 main.go:141] libmachine: (no-preload-658545) Ensuring network mk-no-preload-658545 is active
	I1004 04:24:14.767706   66293 main.go:141] libmachine: (no-preload-658545) Getting domain xml...
	I1004 04:24:14.768478   66293 main.go:141] libmachine: (no-preload-658545) Creating domain...
	I1004 04:24:16.087556   66293 main.go:141] libmachine: (no-preload-658545) Waiting to get IP...
	I1004 04:24:16.088628   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.089032   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.089093   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.089008   68422 retry.go:31] will retry after 276.442313ms: waiting for machine to come up
	I1004 04:24:16.367448   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.367923   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.367953   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.367894   68422 retry.go:31] will retry after 291.504157ms: waiting for machine to come up
	I1004 04:24:16.661396   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.661958   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.662009   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.661932   68422 retry.go:31] will retry after 378.34293ms: waiting for machine to come up
	I1004 04:24:17.041431   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:17.041942   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:17.041970   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:17.041916   68422 retry.go:31] will retry after 553.613866ms: waiting for machine to come up
	I1004 04:24:17.596745   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:17.597294   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:17.597327   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:17.597259   68422 retry.go:31] will retry after 611.098402ms: waiting for machine to come up
	I1004 04:24:18.210083   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:18.210569   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:18.210592   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:18.210530   68422 retry.go:31] will retry after 691.8822ms: waiting for machine to come up
	I1004 04:24:13.873857   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:14.374241   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:14.873863   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.374063   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.873950   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:16.373819   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:16.874290   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:17.374357   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:17.874163   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:18.374160   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.049926   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:17.051060   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:20.132987   67541 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.279324141s)
	I1004 04:24:20.133023   67541 crio.go:469] duration metric: took 2.279442603s to extract the tarball
	I1004 04:24:20.133033   67541 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:24:20.171805   67541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:20.217431   67541 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:24:20.217458   67541 cache_images.go:84] Images are preloaded, skipping loading
	I1004 04:24:20.217468   67541 kubeadm.go:934] updating node { 192.168.39.201 8444 v1.31.1 crio true true} ...
	I1004 04:24:20.217586   67541 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-281471 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:20.217687   67541 ssh_runner.go:195] Run: crio config
	I1004 04:24:20.269529   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:24:20.269559   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:20.269569   67541 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:20.269604   67541 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.201 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-281471 NodeName:default-k8s-diff-port-281471 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:24:20.269822   67541 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.201
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-281471"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:20.269913   67541 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:24:20.281286   67541 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:20.281368   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:20.292186   67541 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1004 04:24:20.310972   67541 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:20.329420   67541 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1004 04:24:20.348358   67541 ssh_runner.go:195] Run: grep 192.168.39.201	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:20.352641   67541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:20.366317   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:20.499648   67541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:20.518930   67541 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471 for IP: 192.168.39.201
	I1004 04:24:20.518954   67541 certs.go:194] generating shared ca certs ...
	I1004 04:24:20.518971   67541 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:20.519121   67541 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:20.519167   67541 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:20.519177   67541 certs.go:256] generating profile certs ...
	I1004 04:24:20.519279   67541 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/client.key
	I1004 04:24:20.519347   67541 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.key.6cd63ef9
	I1004 04:24:20.519381   67541 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.key
	I1004 04:24:20.519492   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:20.519527   67541 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:20.519539   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:20.519570   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:20.519614   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:20.519643   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:20.519710   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:20.520418   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:20.566110   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:20.613646   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:20.648416   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:20.678840   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1004 04:24:20.722021   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 04:24:20.749381   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:20.776777   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 04:24:20.803998   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:20.833182   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:20.859600   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:20.887732   67541 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:20.910566   67541 ssh_runner.go:195] Run: openssl version
	I1004 04:24:20.917151   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:20.930475   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.935819   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.935895   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.942607   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:20.954950   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:20.967348   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.972468   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.972543   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.979061   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:20.992010   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:21.008370   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.015101   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.015161   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.023491   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:21.035766   67541 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:21.041416   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:21.048405   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:21.055468   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:21.062228   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:21.068967   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:21.075984   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1004 04:24:21.086088   67541 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:21.086196   67541 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:21.086253   67541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:21.131997   67541 cri.go:89] found id: ""
	I1004 04:24:21.132061   67541 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:21.145219   67541 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:21.145237   67541 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:21.145289   67541 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:21.157041   67541 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:21.158724   67541 kubeconfig.go:125] found "default-k8s-diff-port-281471" server: "https://192.168.39.201:8444"
	I1004 04:24:21.162295   67541 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:21.173771   67541 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.201
	I1004 04:24:21.173806   67541 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:21.173820   67541 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:21.173891   67541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:21.215149   67541 cri.go:89] found id: ""
	I1004 04:24:21.215216   67541 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:21.234432   67541 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:21.245688   67541 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:21.245707   67541 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:21.245758   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1004 04:24:21.256101   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:21.256168   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:21.267319   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1004 04:24:21.279995   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:21.280050   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:21.292588   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1004 04:24:21.304478   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:21.304545   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:21.317012   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1004 04:24:21.328769   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:21.328853   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:24:21.341597   67541 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:21.353901   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:21.483705   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.340208   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.582628   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.662202   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.773206   67541 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:22.773327   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.274151   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:18.903981   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:18.904373   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:18.904398   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:18.904331   68422 retry.go:31] will retry after 1.022635653s: waiting for machine to come up
	I1004 04:24:19.929163   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:19.929707   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:19.929749   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:19.929656   68422 retry.go:31] will retry after 939.130061ms: waiting for machine to come up
	I1004 04:24:20.870067   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:20.870578   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:20.870606   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:20.870521   68422 retry.go:31] will retry after 1.673919202s: waiting for machine to come up
	I1004 04:24:22.546229   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:22.546621   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:22.546650   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:22.546569   68422 retry.go:31] will retry after 1.962556159s: waiting for machine to come up
	I1004 04:24:18.874214   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.374670   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.874355   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:20.374335   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:20.874299   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:21.374492   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:21.874293   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:22.373890   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:22.874622   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.374639   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.552128   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:22.050844   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:24.051071   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:23.774477   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.807536   67541 api_server.go:72] duration metric: took 1.034328656s to wait for apiserver process to appear ...
	I1004 04:24:23.807569   67541 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:24:23.807593   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.646266   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:26.646299   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:26.646319   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.696828   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:26.696856   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:26.808107   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.819887   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:26.819947   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:27.308535   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:27.317320   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:27.317372   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:27.807868   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:27.817762   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:27.817805   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:28.307660   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:28.313515   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 200:
	ok
	I1004 04:24:28.320539   67541 api_server.go:141] control plane version: v1.31.1
	I1004 04:24:28.320568   67541 api_server.go:131] duration metric: took 4.512991081s to wait for apiserver health ...
	I1004 04:24:28.320578   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:24:28.320586   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:28.322138   67541 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:24:24.511356   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:24.511886   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:24.511917   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:24.511843   68422 retry.go:31] will retry after 2.5950382s: waiting for machine to come up
	I1004 04:24:27.109018   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:27.109474   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:27.109503   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:27.109451   68422 retry.go:31] will retry after 2.984182925s: waiting for machine to come up
	I1004 04:24:23.873822   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:24.373911   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:24.874756   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:25.374035   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:25.873874   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.374503   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.874371   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:27.374335   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:27.873941   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:28.373861   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.550974   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:28.552007   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:28.323513   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:24:28.336556   67541 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:24:28.358371   67541 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:24:28.373163   67541 system_pods.go:59] 8 kube-system pods found
	I1004 04:24:28.373204   67541 system_pods.go:61] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:24:28.373217   67541 system_pods.go:61] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:24:28.373228   67541 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:24:28.373239   67541 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:24:28.373246   67541 system_pods.go:61] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:24:28.373256   67541 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:24:28.373267   67541 system_pods.go:61] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:24:28.373273   67541 system_pods.go:61] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:24:28.373283   67541 system_pods.go:74] duration metric: took 14.891267ms to wait for pod list to return data ...
	I1004 04:24:28.373294   67541 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:24:28.378226   67541 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:24:28.378269   67541 node_conditions.go:123] node cpu capacity is 2
	I1004 04:24:28.378285   67541 node_conditions.go:105] duration metric: took 4.985167ms to run NodePressure ...
	I1004 04:24:28.378309   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:28.649369   67541 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:24:28.654563   67541 kubeadm.go:739] kubelet initialised
	I1004 04:24:28.654584   67541 kubeadm.go:740] duration metric: took 5.188927ms waiting for restarted kubelet to initialise ...
	I1004 04:24:28.654591   67541 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:24:28.662152   67541 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.668248   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.668278   67541 pod_ready.go:82] duration metric: took 6.099746ms for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.668287   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.668294   67541 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.675790   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.675811   67541 pod_ready.go:82] duration metric: took 7.509617ms for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.675823   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.675830   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.683763   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.683811   67541 pod_ready.go:82] duration metric: took 7.972006ms for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.683830   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.683839   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.761974   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.762006   67541 pod_ready.go:82] duration metric: took 78.154275ms for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.762021   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.762030   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.162590   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-proxy-4nnld" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.162623   67541 pod_ready.go:82] duration metric: took 400.583388ms for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.162634   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-proxy-4nnld" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.162643   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.562557   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.562584   67541 pod_ready.go:82] duration metric: took 399.929497ms for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.562595   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.562602   67541 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.963502   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.963528   67541 pod_ready.go:82] duration metric: took 400.919452ms for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.963539   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.963547   67541 pod_ready.go:39] duration metric: took 1.308947485s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:24:29.963561   67541 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:24:29.976241   67541 ops.go:34] apiserver oom_adj: -16
	I1004 04:24:29.976268   67541 kubeadm.go:597] duration metric: took 8.831025549s to restartPrimaryControlPlane
	I1004 04:24:29.976278   67541 kubeadm.go:394] duration metric: took 8.890203906s to StartCluster
	I1004 04:24:29.976295   67541 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:29.976372   67541 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:24:29.977898   67541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:29.978168   67541 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:24:29.978222   67541 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:24:29.978306   67541 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978330   67541 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-281471"
	W1004 04:24:29.978341   67541 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:24:29.978329   67541 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978353   67541 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978369   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:29.978367   67541 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-281471"
	I1004 04:24:29.978377   67541 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-281471"
	W1004 04:24:29.978387   67541 addons.go:243] addon metrics-server should already be in state true
	I1004 04:24:29.978413   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:29.978464   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:29.978731   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978783   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.978818   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978871   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.978839   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978970   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.979903   67541 out.go:177] * Verifying Kubernetes components...
	I1004 04:24:29.981432   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:29.994332   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42631
	I1004 04:24:29.994917   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:29.995488   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:29.995503   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:29.995865   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:29.996675   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:29.999180   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
	I1004 04:24:29.999220   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I1004 04:24:29.999564   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:29.999651   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.000157   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.000182   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.000262   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.000281   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.000379   67541 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-281471"
	W1004 04:24:30.000398   67541 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:24:30.000429   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:30.000613   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.000646   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.000790   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.000812   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.001163   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.001215   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.001259   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.001307   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.016576   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34675
	I1004 04:24:30.016650   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41997
	I1004 04:24:30.016796   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44507
	I1004 04:24:30.016993   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017079   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017138   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017536   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017557   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017548   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017584   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017537   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017621   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017929   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.017931   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.017970   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.018100   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.018152   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.018559   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.018600   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.020021   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.020637   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.022016   67541 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:30.022018   67541 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:24:30.023395   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:24:30.023417   67541 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:24:30.023444   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.023489   67541 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:24:30.023506   67541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:24:30.023528   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.027678   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028005   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028129   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.028180   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028538   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.028552   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.028560   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028724   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.028750   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.028881   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.028911   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.029013   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.029055   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.029124   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.037309   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46395
	I1004 04:24:30.037846   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.038328   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.038355   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.038683   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.038850   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.040366   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.040572   67541 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:24:30.040586   67541 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:24:30.040602   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.043618   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.044070   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.044092   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.044232   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.044413   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.044541   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.044687   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.194435   67541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:30.223577   67541 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-281471" to be "Ready" ...
	I1004 04:24:30.277458   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:24:30.316201   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:24:30.316227   67541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:24:30.333635   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:24:30.346511   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:24:30.346549   67541 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:24:30.405197   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:24:30.405219   67541 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:24:30.465174   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:24:31.307064   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307137   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307430   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.307442   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.307469   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.307546   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307574   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307691   67541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030198983s)
	I1004 04:24:31.307733   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307747   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307789   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.307811   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.309264   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.309275   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.309281   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.309291   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.309299   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.309538   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.309568   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.309583   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.315635   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.315653   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.315917   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.315933   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.411630   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.411658   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.411934   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.411951   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.411965   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.411983   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.411997   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.412221   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.412261   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.412274   67541 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-281471"
	I1004 04:24:31.412283   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.414267   67541 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1004 04:24:31.415607   67541 addons.go:510] duration metric: took 1.43738386s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1004 04:24:32.227563   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
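At this point the log has reported storage-provisioner, default-storageclass and metrics-server enabled for the default-k8s-diff-port-281471 profile while the client keeps polling the node. A rough way to spot-check that same state by hand is sketched below; it is not part of the test output and assumes the kubeconfig context carries the profile name, which is minikube's usual convention.
	# Hedged sketch: spot-check the addons the log reports as enabled.
	kubectl --context default-k8s-diff-port-281471 -n kube-system get deploy metrics-server
	kubectl --context default-k8s-diff-port-281471 -n kube-system get pod storage-provisioner
	kubectl --context default-k8s-diff-port-281471 get storageclass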
	I1004 04:24:30.095611   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:30.096032   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:30.096061   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:30.095981   68422 retry.go:31] will retry after 2.833386023s: waiting for machine to come up
	I1004 04:24:32.933027   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.933509   66293 main.go:141] libmachine: (no-preload-658545) Found IP for machine: 192.168.72.54
	I1004 04:24:32.933538   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has current primary IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.933544   66293 main.go:141] libmachine: (no-preload-658545) Reserving static IP address...
	I1004 04:24:32.933950   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "no-preload-658545", mac: "52:54:00:f5:6c:11", ip: "192.168.72.54"} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:32.933970   66293 main.go:141] libmachine: (no-preload-658545) Reserved static IP address: 192.168.72.54
	I1004 04:24:32.933988   66293 main.go:141] libmachine: (no-preload-658545) DBG | skip adding static IP to network mk-no-preload-658545 - found existing host DHCP lease matching {name: "no-preload-658545", mac: "52:54:00:f5:6c:11", ip: "192.168.72.54"}
	I1004 04:24:32.934002   66293 main.go:141] libmachine: (no-preload-658545) DBG | Getting to WaitForSSH function...
	I1004 04:24:32.934016   66293 main.go:141] libmachine: (no-preload-658545) Waiting for SSH to be available...
	I1004 04:24:32.936089   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.936440   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:32.936471   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.936572   66293 main.go:141] libmachine: (no-preload-658545) DBG | Using SSH client type: external
	I1004 04:24:32.936599   66293 main.go:141] libmachine: (no-preload-658545) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa (-rw-------)
	I1004 04:24:32.936637   66293 main.go:141] libmachine: (no-preload-658545) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:24:32.936650   66293 main.go:141] libmachine: (no-preload-658545) DBG | About to run SSH command:
	I1004 04:24:32.936661   66293 main.go:141] libmachine: (no-preload-658545) DBG | exit 0
	I1004 04:24:33.064432   66293 main.go:141] libmachine: (no-preload-658545) DBG | SSH cmd err, output: <nil>: 
	I1004 04:24:33.064791   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetConfigRaw
	I1004 04:24:33.065494   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:33.068038   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.068302   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.068325   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.068580   66293 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/config.json ...
	I1004 04:24:33.068837   66293 machine.go:93] provisionDockerMachine start ...
	I1004 04:24:33.068858   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:33.069072   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.071425   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.071748   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.071819   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.071946   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.072166   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.072332   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.072429   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.072587   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.072799   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.072814   66293 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:24:33.184623   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:24:33.184656   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.184912   66293 buildroot.go:166] provisioning hostname "no-preload-658545"
	I1004 04:24:33.184946   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.185126   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.188804   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.189189   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.189222   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.189419   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.189664   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.189839   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.190002   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.190128   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.190300   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.190313   66293 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-658545 && echo "no-preload-658545" | sudo tee /etc/hostname
	I1004 04:24:33.316349   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-658545
	
	I1004 04:24:33.316381   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.319460   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.319908   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.319945   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.320110   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.320301   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.320475   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.320628   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.320811   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.321031   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.321058   66293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-658545' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-658545/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-658545' | sudo tee -a /etc/hosts; 
				fi
			fi
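The block above is the /etc/hosts fixup minikube runs over SSH right after setting the hostname: if no entry for no-preload-658545 exists, it either rewrites the 127.0.1.1 line or appends one. A minimal manual check on the guest (a sketch, not part of the test output) would be:
	# Hedged sketch: confirm what the hostname script above should leave behind.
	cat /etc/hostname
	grep 'no-preload-658545' /etc/hosts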
	I1004 04:24:28.874265   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:29.374364   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:29.874581   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:30.373909   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:30.874089   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.374708   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.874696   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:32.374061   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:32.874233   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:33.374290   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.050105   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:33.549870   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:33.444185   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:24:33.444221   66293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:24:33.444246   66293 buildroot.go:174] setting up certificates
	I1004 04:24:33.444257   66293 provision.go:84] configureAuth start
	I1004 04:24:33.444273   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.444569   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:33.447726   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.448137   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.448168   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.448332   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.450903   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.451311   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.451340   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.451479   66293 provision.go:143] copyHostCerts
	I1004 04:24:33.451559   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:24:33.451571   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:24:33.451638   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:24:33.451748   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:24:33.451763   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:24:33.451818   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:24:33.451897   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:24:33.451906   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:24:33.451931   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:24:33.451992   66293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.no-preload-658545 san=[127.0.0.1 192.168.72.54 localhost minikube no-preload-658545]
	I1004 04:24:33.577106   66293 provision.go:177] copyRemoteCerts
	I1004 04:24:33.577160   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:24:33.577183   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.579990   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.580330   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.580359   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.580496   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.580672   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.580810   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.580937   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:33.671123   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:24:33.697805   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1004 04:24:33.725408   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:24:33.751285   66293 provision.go:87] duration metric: took 307.010531ms to configureAuth
	I1004 04:24:33.751315   66293 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:24:33.751553   66293 config.go:182] Loaded profile config "no-preload-658545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:33.751651   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.754476   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.754896   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.754938   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.755087   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.755282   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.755450   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.755592   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.755723   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.755969   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.755987   66293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:24:33.996596   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:24:33.996625   66293 machine.go:96] duration metric: took 927.772762ms to provisionDockerMachine
	I1004 04:24:33.996636   66293 start.go:293] postStartSetup for "no-preload-658545" (driver="kvm2")
	I1004 04:24:33.996645   66293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:24:33.996662   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:33.996958   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:24:33.996981   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.999632   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.000082   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.000111   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.000324   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.000537   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.000733   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.000924   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.089338   66293 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:24:34.094278   66293 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:24:34.094303   66293 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:24:34.094377   66293 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:24:34.094468   66293 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:24:34.094597   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:24:34.105335   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:34.134191   66293 start.go:296] duration metric: took 137.541908ms for postStartSetup
	I1004 04:24:34.134243   66293 fix.go:56] duration metric: took 19.393079344s for fixHost
	I1004 04:24:34.134269   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.137227   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.137599   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.137638   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.137779   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.137978   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.138156   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.138289   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.138459   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:34.138652   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:34.138663   66293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:24:34.250671   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015874.218795126
	
	I1004 04:24:34.250699   66293 fix.go:216] guest clock: 1728015874.218795126
	I1004 04:24:34.250709   66293 fix.go:229] Guest: 2024-10-04 04:24:34.218795126 +0000 UTC Remote: 2024-10-04 04:24:34.134249208 +0000 UTC m=+355.755571497 (delta=84.545918ms)
	I1004 04:24:34.250735   66293 fix.go:200] guest clock delta is within tolerance: 84.545918ms
	I1004 04:24:34.250742   66293 start.go:83] releasing machines lock for "no-preload-658545", held for 19.509615446s
	I1004 04:24:34.250763   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.250965   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:34.254332   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.254720   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.254746   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.254982   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255550   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255745   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255843   66293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:24:34.255907   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.255973   66293 ssh_runner.go:195] Run: cat /version.json
	I1004 04:24:34.255996   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.258802   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259036   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259118   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.259143   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259309   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.259487   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.259538   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.259563   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259633   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.259752   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.259845   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.259891   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.260042   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.260180   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.362345   66293 ssh_runner.go:195] Run: systemctl --version
	I1004 04:24:34.368641   66293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:24:34.527679   66293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:24:34.534212   66293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:24:34.534291   66293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:24:34.553539   66293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:24:34.553570   66293 start.go:495] detecting cgroup driver to use...
	I1004 04:24:34.553638   66293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:24:34.573489   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:24:34.588220   66293 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:24:34.588281   66293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:24:34.606014   66293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:24:34.621246   66293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:24:34.749423   66293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:24:34.915880   66293 docker.go:233] disabling docker service ...
	I1004 04:24:34.915960   66293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:24:34.936625   66293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:24:34.951534   66293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:24:35.089398   66293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:24:35.225269   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:24:35.241006   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:24:35.261586   66293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:24:35.261651   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.273501   66293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:24:35.273571   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.285392   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.296475   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.307774   66293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:24:35.319241   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.330361   66293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.349013   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.360603   66293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:24:35.371516   66293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:24:35.371581   66293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:24:35.387209   66293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:24:35.398144   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:35.528196   66293 ssh_runner.go:195] Run: sudo systemctl restart crio
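
The sed runs above pin CRI-O's pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. Below is a minimal Go sketch of the same in-place substitutions; the path and values come from the log, while the helper name and error handling are only illustrative.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf pins the pause image and cgroup manager in a CRI-O drop-in,
// mirroring the sed substitutions shown in the log above.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = "%s"`, pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf(`cgroup_manager = "%s"`, cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
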
	I1004 04:24:35.629120   66293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:24:35.629198   66293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:24:35.634243   66293 start.go:563] Will wait 60s for crictl version
	I1004 04:24:35.634307   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:35.638372   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:24:35.678659   66293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:24:35.678763   66293 ssh_runner.go:195] Run: crio --version
	I1004 04:24:35.715285   66293 ssh_runner.go:195] Run: crio --version
	I1004 04:24:35.751571   66293 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
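
A few lines up, minikube waits up to 60s for /var/run/crio/crio.sock to appear before asking crictl for a version. A hedged sketch of that bounded poll-for-a-path loop follows; the path is taken from the log, the interval is only an example.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the given path exists or the timeout elapses,
// similar to the "Will wait 60s for socket path" step in the log.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is present")
}
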
	I1004 04:24:34.228500   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:36.727080   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:37.228706   67541 node_ready.go:49] node "default-k8s-diff-port-281471" has status "Ready":"True"
	I1004 04:24:37.228745   67541 node_ready.go:38] duration metric: took 7.005123712s for node "default-k8s-diff-port-281471" to be "Ready" ...
	I1004 04:24:37.228760   67541 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:24:37.235256   67541 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
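
The pod_ready.go lines above poll system pods until their Ready condition turns True. A rough client-go equivalent of that check is sketched below; the kubeconfig source and pod name are placeholders, not minikube's own code.

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// which is what the pod_ready.go polling above keeps checking.
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Assumes KUBECONFIG points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(context.Background(), cs, "kube-system", "coredns-7c65d6cfc9-wz6rd")
	fmt.Println(ready, err)
}
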
	I1004 04:24:35.752737   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:35.755375   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:35.755763   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:35.755818   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:35.756063   66293 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1004 04:24:35.760601   66293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
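
The grep and rewrite just above make sure /etc/hosts carries exactly one host.minikube.internal entry. A small sketch of that filter-and-append pattern is below; the IP, hostname, and path are from the log, the function itself is illustrative.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for the given hostname and
// appends a fresh "ip<TAB>hostname" entry, mirroring the grep -v / echo
// pipeline in the log above.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+hostname) {
			continue // stale entry, drop it
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.72.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
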
	I1004 04:24:35.773870   66293 kubeadm.go:883] updating cluster {Name:no-preload-658545 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:24:35.773970   66293 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:24:35.774001   66293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:35.813619   66293 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:24:35.813650   66293 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 04:24:35.813736   66293 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:35.813756   66293 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.813785   66293 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1004 04:24:35.813796   66293 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.813877   66293 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:35.813740   66293 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.813758   66293 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.813771   66293 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.815271   66293 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:35.815277   66293 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1004 04:24:35.815292   66293 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.815271   66293 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.815276   66293 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:35.815353   66293 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.815358   66293 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.815402   66293 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.956470   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.963066   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.965110   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.970080   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.972477   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.988253   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.013802   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1004 04:24:36.063322   66293 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1004 04:24:36.063364   66293 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.063405   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214786   66293 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1004 04:24:36.214827   66293 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.214867   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214928   66293 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1004 04:24:36.214961   66293 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1004 04:24:36.214995   66293 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.215023   66293 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1004 04:24:36.215043   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214965   66293 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.215081   66293 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1004 04:24:36.215047   66293 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.215100   66293 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.215110   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.215139   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.215147   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.274103   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.274103   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.274185   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.274292   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.274329   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.274343   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.392523   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.405236   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.405257   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.408799   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.408857   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.408860   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.511001   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.568598   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.568658   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.568720   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.568929   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.569021   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.599594   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1004 04:24:36.599733   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.696242   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1004 04:24:36.696294   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1004 04:24:36.696336   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1004 04:24:36.696363   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:36.696390   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:36.696399   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:36.696401   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1004 04:24:36.696449   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1004 04:24:36.696507   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:36.696521   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:36.696508   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1004 04:24:36.696563   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.696613   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.701522   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1004 04:24:37.132809   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
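
The cache_images block above inspects each required image with podman and, when one is missing from the runtime, stages the cached tarball under /var/lib/minikube/images and loads it with podman load -i. A hedged sketch of that check-then-load loop follows; image names and paths are taken from the log, helper names are made up.

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// imagePresent asks podman whether the image already exists in the runtime,
// the same check as the "podman image inspect --format {{.Id}}" lines above.
func imagePresent(image string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil
}

// loadFromCache loads a cached image tarball, like the
// "sudo podman load -i /var/lib/minikube/images/..." steps above.
func loadFromCache(tarball string) error {
	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	cacheDir := "/var/lib/minikube/images" // assumes the tarballs have already been copied here
	images := map[string]string{
		"registry.k8s.io/kube-scheduler:v1.31.1": "kube-scheduler_v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0":          "etcd_3.5.15-0",
	}
	for ref, file := range images {
		if imagePresent(ref) {
			fmt.Println("already loaded:", ref)
			continue
		}
		if err := loadFromCache(filepath.Join(cacheDir, file)); err != nil {
			fmt.Println("load failed:", err)
		}
	}
}
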
	I1004 04:24:33.874344   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:34.374158   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:34.873848   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:35.373944   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:35.874697   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.373831   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.874231   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:37.374723   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:37.873861   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:38.374206   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
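
The repeated pgrep runs above are the same wait-loop idea applied to a process: poll every ~500ms until a kube-apiserver matching the pattern shows up. A minimal sketch, with the pattern copied from the log and the timeout an assumption:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern exists,
// mirroring the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches.
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 4*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver is running")
}
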
	I1004 04:24:36.050420   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:38.051653   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:39.242026   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:41.244977   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:39.289977   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.593422519s)
	I1004 04:24:39.290020   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1004 04:24:39.290087   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.593446646s)
	I1004 04:24:39.290114   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1004 04:24:39.290136   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:39.290158   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.593739386s)
	I1004 04:24:39.290175   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1004 04:24:39.290097   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.593563637s)
	I1004 04:24:39.290203   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.593795645s)
	I1004 04:24:39.290208   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:39.290213   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1004 04:24:39.290213   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1004 04:24:39.290265   66293 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.157417466s)
	I1004 04:24:39.290314   66293 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1004 04:24:39.290348   66293 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:39.290392   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:40.750955   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.460708297s)
	I1004 04:24:40.751065   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1004 04:24:40.751102   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:40.750969   66293 ssh_runner.go:235] Completed: which crictl: (1.460561899s)
	I1004 04:24:40.751159   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:40.751190   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:43.031349   66293 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.280136047s)
	I1004 04:24:43.031395   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.280209115s)
	I1004 04:24:43.031566   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1004 04:24:43.031493   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:43.031600   66293 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:43.031641   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:43.084191   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:38.873705   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:39.374361   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:39.874144   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.373793   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.873796   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:41.374744   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:41.874442   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:42.374561   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:42.874638   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:43.374677   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.548818   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:42.550744   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:43.742554   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:44.244427   67541 pod_ready.go:93] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.244453   67541 pod_ready.go:82] duration metric: took 7.009169057s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.244463   67541 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.250595   67541 pod_ready.go:93] pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.250617   67541 pod_ready.go:82] duration metric: took 6.147481ms for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.250625   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.256537   67541 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.256570   67541 pod_ready.go:82] duration metric: took 5.936641ms for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.256583   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.262681   67541 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.262707   67541 pod_ready.go:82] duration metric: took 6.115804ms for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.262721   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.271089   67541 pod_ready.go:93] pod "kube-proxy-4nnld" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.271124   67541 pod_ready.go:82] duration metric: took 8.394207ms for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.271138   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.640124   67541 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.640158   67541 pod_ready.go:82] duration metric: took 369.009816ms for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.640172   67541 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:46.647420   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:45.132971   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.101305613s)
	I1004 04:24:45.133043   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1004 04:24:45.133071   66293 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.048844025s)
	I1004 04:24:45.133079   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:45.133110   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1004 04:24:45.133135   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:45.133179   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:47.228047   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.094844592s)
	I1004 04:24:47.228087   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1004 04:24:47.228089   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.0949275s)
	I1004 04:24:47.228119   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1004 04:24:47.228154   66293 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:47.228214   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:43.874583   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:44.374117   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:44.874398   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.374755   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.874039   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:46.374598   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:46.874446   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:47.374384   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:47.874596   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:48.374021   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.049760   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:47.551861   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:48.647700   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:50.648288   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:52.649288   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:50.627043   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.398805191s)
	I1004 04:24:50.627085   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1004 04:24:50.627122   66293 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:50.627191   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:51.282056   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1004 04:24:51.282099   66293 cache_images.go:123] Successfully loaded all cached images
	I1004 04:24:51.282104   66293 cache_images.go:92] duration metric: took 15.468441268s to LoadCachedImages
	I1004 04:24:51.282116   66293 kubeadm.go:934] updating node { 192.168.72.54 8443 v1.31.1 crio true true} ...
	I1004 04:24:51.282243   66293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-658545 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:51.282321   66293 ssh_runner.go:195] Run: crio config
	I1004 04:24:51.333133   66293 cni.go:84] Creating CNI manager for ""
	I1004 04:24:51.333162   66293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:51.333173   66293 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:51.333201   66293 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-658545 NodeName:no-preload-658545 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:24:51.333361   66293 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-658545"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:51.333419   66293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:24:51.344694   66293 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:51.344757   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:51.354990   66293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1004 04:24:51.372572   66293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:51.394129   66293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1004 04:24:51.412865   66293 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:51.416985   66293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:51.430835   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:51.559349   66293 ssh_runner.go:195] Run: sudo systemctl start kubelet
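
Just above, minikube stages the kubelet unit and its 10-kubeadm.conf drop-in over SSH, reloads systemd, and starts the kubelet. A local sketch of that sequence follows; the ExecStart flags are taken from the unit dump printed earlier in the log, and the shortened drop-in body is illustrative rather than minikube's exact file.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// The ExecStart flags mirror the kubelet unit printed earlier in the log;
// this shortened drop-in body is illustrative, not minikube's exact file.
const kubeletDropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-658545 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
`

func run(args ...string) error {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf",
		[]byte(kubeletDropIn), 0o644); err != nil {
		panic(err)
	}
	// Pick up the new drop-in and start the kubelet, as in the log.
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "start", "kubelet"},
	} {
		if err := run(args...); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}
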
	I1004 04:24:51.579093   66293 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545 for IP: 192.168.72.54
	I1004 04:24:51.579120   66293 certs.go:194] generating shared ca certs ...
	I1004 04:24:51.579140   66293 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:51.579318   66293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:51.579378   66293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:51.579391   66293 certs.go:256] generating profile certs ...
	I1004 04:24:51.579494   66293 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/client.key
	I1004 04:24:51.579588   66293 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.key.10ceac04
	I1004 04:24:51.579648   66293 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.key
	I1004 04:24:51.579808   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:51.579849   66293 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:51.579861   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:51.579891   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:51.579926   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:51.579961   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:51.580018   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:51.580871   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:51.630190   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:51.667887   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:51.715372   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:51.750063   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1004 04:24:51.776606   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 04:24:51.808943   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:51.839165   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:24:51.867862   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:51.898026   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:51.926810   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:51.955416   66293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:51.977621   66293 ssh_runner.go:195] Run: openssl version
	I1004 04:24:51.984023   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:51.997672   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.002969   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.003039   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.009473   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:52.021001   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:52.032834   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.037679   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.037742   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.044012   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:52.055377   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:52.066222   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.070747   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.070794   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.076922   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:52.087952   66293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:52.093052   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:52.099710   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:52.105841   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:52.112092   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:52.118428   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:52.125380   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
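
The openssl x509 -noout -checkend 86400 runs above confirm that each control-plane certificate stays valid for at least another 24 hours. The same test expressed with Go's crypto/x509 is sketched below; the path is one of the certs checked in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, i.e. the openssl "-checkend" test from the log.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h, regenerate it")
	} else {
		fmt.Println("certificate is still valid")
	}
}
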
	I1004 04:24:52.132085   66293 kubeadm.go:392] StartCluster: {Name:no-preload-658545 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:52.132193   66293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:52.132254   66293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:52.171814   66293 cri.go:89] found id: ""
	I1004 04:24:52.171882   66293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:52.182484   66293 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:52.182508   66293 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:52.182559   66293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:52.193069   66293 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:52.194108   66293 kubeconfig.go:125] found "no-preload-658545" server: "https://192.168.72.54:8443"
	I1004 04:24:52.196237   66293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:52.206551   66293 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I1004 04:24:52.206584   66293 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:52.206598   66293 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:52.206657   66293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:52.249698   66293 cri.go:89] found id: ""
	I1004 04:24:52.249762   66293 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:52.266001   66293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:52.276056   66293 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:52.276081   66293 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:52.276128   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:24:52.285610   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:52.285677   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:52.295177   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:24:52.304309   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:52.304362   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:52.314126   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:24:52.323562   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:52.323618   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:52.332906   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:24:52.342199   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:52.342252   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:24:52.351661   66293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:52.361071   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:52.493171   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:48.874471   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:49.374480   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:49.874689   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.373726   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.874543   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:51.373743   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:51.874513   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:52.374719   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:52.874305   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:53.374419   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.049668   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:52.050522   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:55.147282   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:57.648169   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:53.586422   66293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.093219868s)
	I1004 04:24:53.586448   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:53.794085   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:53.872327   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
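
The kubeadm commands above replay individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml instead of running a full kubeadm init. A sketch of driving that phase sequence from Go follows; the binary path is inferred from the PATH prefix in the log, and sudo/env handling is omitted.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.31.1/kubeadm" // inferred from the PATH prefix in the log
	config := "/var/tmp/minikube/kubeadm.yaml"
	// Same phase order as the restartPrimaryControlPlane sequence above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", config)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}
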
	I1004 04:24:54.004418   66293 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:54.004510   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.505463   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.004602   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.036834   66293 api_server.go:72] duration metric: took 1.032414365s to wait for apiserver process to appear ...
	I1004 04:24:55.036858   66293 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:24:55.036877   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:55.037325   66293 api_server.go:269] stopped: https://192.168.72.54:8443/healthz: Get "https://192.168.72.54:8443/healthz": dial tcp 192.168.72.54:8443: connect: connection refused
	I1004 04:24:55.537513   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:57.951637   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:57.951663   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:57.951676   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.010162   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:58.010188   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:58.037484   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.060069   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:58.060161   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:53.874725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.373903   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.874127   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.374051   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.874019   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:56.373828   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:56.874027   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:57.373914   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:57.874598   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:58.374106   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.550080   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:56.550541   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:59.051837   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:58.536932   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.541611   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:58.541634   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:59.037723   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:59.057378   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:59.057411   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:59.536994   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:59.545827   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 200:
	ok
	I1004 04:24:59.554199   66293 api_server.go:141] control plane version: v1.31.1
	I1004 04:24:59.554238   66293 api_server.go:131] duration metric: took 4.517373336s to wait for apiserver health ...
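The healthz sequence above polls https://192.168.72.54:8443/healthz until it returns 200 "ok": the 403 responses (anonymous access before RBAC bootstrap completes) and the 500 responses (poststarthook checks still failing) both count as "not ready yet". A small Go sketch of such a probe; certificate verification is disabled here purely for illustration, and how minikube's api_server.go actually authenticates is not shown in this log.

// healthzwait.go: poll the apiserver healthz endpoint until it reports ok.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 responses (as in the log above) mean not ready yet;
			// a 200 whose body is "ok" means every check passed.
			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s never returned 200 within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.54:8443/healthz", 4*time.Minute))
}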
	I1004 04:24:59.554247   66293 cni.go:84] Creating CNI manager for ""
	I1004 04:24:59.554253   66293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:59.555912   66293 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:24:59.557009   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:24:59.590146   66293 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:24:59.610903   66293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:24:59.634067   66293 system_pods.go:59] 8 kube-system pods found
	I1004 04:24:59.634109   66293 system_pods.go:61] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:24:59.634121   66293 system_pods.go:61] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:24:59.634131   66293 system_pods.go:61] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:24:59.634143   66293 system_pods.go:61] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:24:59.634151   66293 system_pods.go:61] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 04:24:59.634160   66293 system_pods.go:61] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:24:59.634168   66293 system_pods.go:61] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:24:59.634181   66293 system_pods.go:61] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 04:24:59.634189   66293 system_pods.go:74] duration metric: took 23.257716ms to wait for pod list to return data ...
	I1004 04:24:59.634198   66293 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:24:59.638128   66293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:24:59.638160   66293 node_conditions.go:123] node cpu capacity is 2
	I1004 04:24:59.638173   66293 node_conditions.go:105] duration metric: took 3.969841ms to run NodePressure ...
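The node_conditions lines above read the node's capacity (2 CPUs and 17734596Ki of ephemeral storage here) while verifying NodePressure. A reduced client-go sketch of reading the same capacity fields, assuming a kubeconfig at the default location rather than minikube's internal client:

// nodecapacity.go: print CPU and ephemeral-storage capacity for each node.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		// The log above reports cpu capacity 2 and ephemeral storage 17734596Ki.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}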
	I1004 04:24:59.638191   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:59.968829   66293 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:24:59.975495   66293 kubeadm.go:739] kubelet initialised
	I1004 04:24:59.975516   66293 kubeadm.go:740] duration metric: took 6.660196ms waiting for restarted kubelet to initialise ...
	I1004 04:24:59.975522   66293 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:25:00.084084   66293 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.113474   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.113498   66293 pod_ready.go:82] duration metric: took 29.379607ms for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.113507   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.113513   66293 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.128436   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "etcd-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.128463   66293 pod_ready.go:82] duration metric: took 14.94278ms for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.128475   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "etcd-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.128485   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.140033   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-apiserver-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.140059   66293 pod_ready.go:82] duration metric: took 11.56545ms for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.140068   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-apiserver-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.140077   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.157254   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.157286   66293 pod_ready.go:82] duration metric: took 17.197805ms for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.157298   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.157306   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.415110   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-proxy-dvr6b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.415141   66293 pod_ready.go:82] duration metric: took 257.824162ms for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.415151   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-proxy-dvr6b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.415157   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.815201   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-scheduler-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.815226   66293 pod_ready.go:82] duration metric: took 400.063468ms for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.815235   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-scheduler-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.815241   66293 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:01.214416   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:01.214448   66293 pod_ready.go:82] duration metric: took 399.197779ms for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:01.214461   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:01.214468   66293 pod_ready.go:39] duration metric: took 1.238937842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
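The pod_ready entries above are per-pod waits on the Ready condition; while the hosting node still reports Ready False, each wait is cut short with the WaitExtra messages shown. A reduced client-go sketch of the Ready-condition check, assuming the default kubeconfig; minikube's pod_ready.go layers the node-status bookkeeping and the label-based pod selection on top of this, and the pod name in main is only an example taken from the log.

// podready.go: wait for a single pod's Ready condition to become True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the pod until it is Ready or the timeout expires.
func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s was not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForPodReady(cs, "kube-system", "coredns-7c65d6cfc9-ppggj", 4*time.Minute))
}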
	I1004 04:25:01.214484   66293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:25:01.227389   66293 ops.go:34] apiserver oom_adj: -16
	I1004 04:25:01.227414   66293 kubeadm.go:597] duration metric: took 9.044898439s to restartPrimaryControlPlane
	I1004 04:25:01.227424   66293 kubeadm.go:394] duration metric: took 9.095346513s to StartCluster
	I1004 04:25:01.227441   66293 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:25:01.227520   66293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:25:01.229057   66293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:25:01.229318   66293 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:25:01.229389   66293 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:25:01.229496   66293 addons.go:69] Setting storage-provisioner=true in profile "no-preload-658545"
	I1004 04:25:01.229505   66293 addons.go:69] Setting default-storageclass=true in profile "no-preload-658545"
	I1004 04:25:01.229512   66293 addons.go:234] Setting addon storage-provisioner=true in "no-preload-658545"
	W1004 04:25:01.229520   66293 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:25:01.229524   66293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-658545"
	I1004 04:25:01.229558   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.229562   66293 config.go:182] Loaded profile config "no-preload-658545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:25:01.229557   66293 addons.go:69] Setting metrics-server=true in profile "no-preload-658545"
	I1004 04:25:01.229607   66293 addons.go:234] Setting addon metrics-server=true in "no-preload-658545"
	W1004 04:25:01.229621   66293 addons.go:243] addon metrics-server should already be in state true
	I1004 04:25:01.229655   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.229968   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.229987   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.229971   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.230013   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.230030   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.230133   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.231051   66293 out.go:177] * Verifying Kubernetes components...
	I1004 04:25:01.232578   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:25:01.256283   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38313
	I1004 04:25:01.256939   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.257689   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.257720   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.258124   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.258358   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.262593   66293 addons.go:234] Setting addon default-storageclass=true in "no-preload-658545"
	W1004 04:25:01.262620   66293 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:25:01.262652   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.263036   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.263117   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.274653   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I1004 04:25:01.275130   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.275655   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.275685   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.276062   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.276652   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.276697   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.277272   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I1004 04:25:01.277756   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.278175   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.278191   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.278548   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.279116   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.279163   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.283719   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1004 04:25:01.284316   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.284814   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.284836   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.285180   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.285751   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.285801   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.297682   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I1004 04:25:01.297859   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I1004 04:25:01.298298   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.298418   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.298975   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.298995   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.299058   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.299077   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.299407   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.299470   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.299618   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.299660   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.301552   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.302048   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.303197   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I1004 04:25:01.303600   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.304053   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.304068   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.304124   66293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:25:01.304234   66293 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:25:01.304403   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.304571   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.305715   66293 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:25:01.305735   66293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:25:01.305850   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:25:01.305861   66293 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:25:01.305876   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.305752   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.306101   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.306321   66293 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:25:01.306334   66293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:25:01.306349   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.310374   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.310752   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.310776   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.310888   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.311057   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.311192   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.311272   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.311338   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.311603   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312049   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.312072   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312175   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.312201   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312302   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.312468   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.312497   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.312586   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.312658   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.312681   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.312811   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.312948   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.478533   66293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:25:01.511716   66293 node_ready.go:35] waiting up to 6m0s for node "no-preload-658545" to be "Ready" ...
	I1004 04:25:01.557879   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:25:01.574381   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:25:01.601090   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:25:01.601112   66293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:25:01.630465   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:25:01.630495   66293 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:25:01.681089   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:25:01.681118   66293 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:25:01.703024   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:25:02.053562   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.053585   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.053855   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.053871   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.053882   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.053891   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.054118   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.054139   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.054128   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.061624   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.061646   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.061949   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.061967   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.061985   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.580950   66293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.00653263s)
	I1004 04:25:02.581002   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.581014   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.581350   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.581368   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.581376   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.581382   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.581459   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.581594   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.581606   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.702713   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.702739   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.703015   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.703028   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.703090   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.703106   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.703117   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.703347   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.703363   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.703380   66293 addons.go:475] Verifying addon metrics-server=true in "no-preload-658545"
	I1004 04:25:02.705335   66293 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1004 04:24:59.648241   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:01.649424   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:02.706605   66293 addons.go:510] duration metric: took 1.477226s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1004 04:24:58.874143   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:59.373810   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:59.874682   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:00.374672   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:00.873725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.374175   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.874724   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:02.374725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:02.874746   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:03.373689   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.548783   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:03.549515   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:04.146633   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:06.147540   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.147626   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:03.516566   66293 node_ready.go:53] node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:06.022815   66293 node_ready.go:53] node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:03.874594   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:04.374498   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:04.874377   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:05.374050   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:05.374139   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:05.412153   67282 cri.go:89] found id: ""
	I1004 04:25:05.412185   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.412195   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:05.412202   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:05.412264   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:05.446725   67282 cri.go:89] found id: ""
	I1004 04:25:05.446750   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.446758   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:05.446763   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:05.446816   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:05.487652   67282 cri.go:89] found id: ""
	I1004 04:25:05.487678   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.487686   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:05.487691   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:05.487752   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:05.526275   67282 cri.go:89] found id: ""
	I1004 04:25:05.526302   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.526310   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:05.526319   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:05.526375   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:05.565004   67282 cri.go:89] found id: ""
	I1004 04:25:05.565034   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.565045   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:05.565052   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:05.565101   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:05.601963   67282 cri.go:89] found id: ""
	I1004 04:25:05.601990   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.601998   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:05.602003   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:05.602051   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:05.638621   67282 cri.go:89] found id: ""
	I1004 04:25:05.638651   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.638660   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:05.638666   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:05.638720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:05.678042   67282 cri.go:89] found id: ""
	I1004 04:25:05.678071   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.678082   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:05.678093   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:05.678107   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:05.720677   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:05.720707   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:05.775219   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:05.775252   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:05.789748   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:05.789774   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:05.918752   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:05.918783   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:05.918798   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
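The 67282 lines above are the diagnostics pass run while that control plane is still down: for each expected component the test lists containers with crictl and, finding none, falls back to gathering kubelet, dmesg, describe-nodes and CRI-O logs. A sketch of the container-discovery step, assuming crictl is available on the node; the component names are the ones the log itself queries.

// crilist.go: list container IDs by name with crictl; an empty result mirrors
// the "No container was found matching ..." entries above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func findContainers(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, name := range components {
		ids := findContainers(name)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}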
	I1004 04:25:08.493206   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:05.551035   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.048870   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:10.148154   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.645708   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.516666   66293 node_ready.go:49] node "no-preload-658545" has status "Ready":"True"
	I1004 04:25:08.516690   66293 node_ready.go:38] duration metric: took 7.004939371s for node "no-preload-658545" to be "Ready" ...
	I1004 04:25:08.516699   66293 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:25:08.522101   66293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.527132   66293 pod_ready.go:93] pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:08.527153   66293 pod_ready.go:82] duration metric: took 5.024648ms for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.527162   66293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.534172   66293 pod_ready.go:93] pod "etcd-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:08.534195   66293 pod_ready.go:82] duration metric: took 7.027189ms for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.534204   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:10.541186   66293 pod_ready.go:103] pod "kube-apiserver-no-preload-658545" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.040607   66293 pod_ready.go:93] pod "kube-apiserver-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.040640   66293 pod_ready.go:82] duration metric: took 3.506428875s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.040654   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.045845   66293 pod_ready.go:93] pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.045870   66293 pod_ready.go:82] duration metric: took 5.207108ms for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.045883   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.051587   66293 pod_ready.go:93] pod "kube-proxy-dvr6b" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.051604   66293 pod_ready.go:82] duration metric: took 5.715328ms for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.051613   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.116361   66293 pod_ready.go:93] pod "kube-scheduler-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.116401   66293 pod_ready.go:82] duration metric: took 64.774234ms for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.116411   66293 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.506490   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:08.506549   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:08.545875   67282 cri.go:89] found id: ""
	I1004 04:25:08.545909   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.545920   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:08.545933   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:08.545997   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:08.582348   67282 cri.go:89] found id: ""
	I1004 04:25:08.582375   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.582383   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:08.582389   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:08.582438   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:08.637763   67282 cri.go:89] found id: ""
	I1004 04:25:08.637797   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.637809   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:08.637816   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:08.637890   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:08.681171   67282 cri.go:89] found id: ""
	I1004 04:25:08.681205   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.681216   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:08.681224   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:08.681289   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:08.719513   67282 cri.go:89] found id: ""
	I1004 04:25:08.719542   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.719549   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:08.719555   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:08.719607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:08.762152   67282 cri.go:89] found id: ""
	I1004 04:25:08.762175   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.762183   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:08.762188   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:08.762251   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:08.799857   67282 cri.go:89] found id: ""
	I1004 04:25:08.799881   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.799892   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:08.799903   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:08.799954   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:08.835264   67282 cri.go:89] found id: ""
	I1004 04:25:08.835296   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.835308   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:08.835318   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:08.835330   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:08.875501   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:08.875532   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:08.929145   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:08.929178   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:08.942769   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:08.942808   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:09.025372   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:09.025401   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:09.025416   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:11.611179   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:11.625118   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:11.625253   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:11.661512   67282 cri.go:89] found id: ""
	I1004 04:25:11.661540   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.661547   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:11.661553   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:11.661607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:11.704902   67282 cri.go:89] found id: ""
	I1004 04:25:11.704931   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.704941   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:11.704948   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:11.705007   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:11.741747   67282 cri.go:89] found id: ""
	I1004 04:25:11.741770   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.741780   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:11.741787   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:11.741841   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:11.776838   67282 cri.go:89] found id: ""
	I1004 04:25:11.776863   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.776871   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:11.776876   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:11.776927   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:11.812996   67282 cri.go:89] found id: ""
	I1004 04:25:11.813024   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.813033   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:11.813038   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:11.813097   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:11.853718   67282 cri.go:89] found id: ""
	I1004 04:25:11.853744   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.853752   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:11.853758   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:11.853813   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:11.896840   67282 cri.go:89] found id: ""
	I1004 04:25:11.896867   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.896879   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:11.896885   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:11.896943   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:11.932529   67282 cri.go:89] found id: ""
	I1004 04:25:11.932552   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.932561   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:11.932569   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:11.932580   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:11.946504   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:11.946538   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:12.024692   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:12.024713   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:12.024724   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:12.111942   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:12.111976   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:12.156483   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:12.156522   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:10.049912   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.051024   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.646058   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:16.647214   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.123343   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:16.622947   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.708243   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:14.722943   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:14.723007   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:14.758502   67282 cri.go:89] found id: ""
	I1004 04:25:14.758555   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.758567   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:14.758575   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:14.758633   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:14.796496   67282 cri.go:89] found id: ""
	I1004 04:25:14.796525   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.796532   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:14.796538   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:14.796595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:14.832216   67282 cri.go:89] found id: ""
	I1004 04:25:14.832247   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.832259   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:14.832266   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:14.832330   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:14.868461   67282 cri.go:89] found id: ""
	I1004 04:25:14.868491   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.868501   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:14.868509   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:14.868568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:14.909827   67282 cri.go:89] found id: ""
	I1004 04:25:14.909857   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.909867   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:14.909875   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:14.909949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:14.947809   67282 cri.go:89] found id: ""
	I1004 04:25:14.947839   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.947850   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:14.947857   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:14.947904   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:14.984073   67282 cri.go:89] found id: ""
	I1004 04:25:14.984101   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.984110   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:14.984115   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:14.984170   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:15.021145   67282 cri.go:89] found id: ""
	I1004 04:25:15.021179   67282 logs.go:282] 0 containers: []
	W1004 04:25:15.021191   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:15.021204   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:15.021217   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:15.075295   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:15.075328   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:15.088953   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:15.088980   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:15.175103   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:15.175128   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:15.175143   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:15.259004   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:15.259044   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:17.825029   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:17.839496   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:17.839574   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:17.877643   67282 cri.go:89] found id: ""
	I1004 04:25:17.877673   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.877684   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:17.877692   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:17.877751   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:17.921534   67282 cri.go:89] found id: ""
	I1004 04:25:17.921563   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.921574   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:17.921581   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:17.921634   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:17.961281   67282 cri.go:89] found id: ""
	I1004 04:25:17.961307   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.961315   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:17.961320   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:17.961386   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:18.001036   67282 cri.go:89] found id: ""
	I1004 04:25:18.001066   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.001078   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:18.001085   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:18.001156   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:18.043212   67282 cri.go:89] found id: ""
	I1004 04:25:18.043241   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.043252   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:18.043259   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:18.043319   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:18.082399   67282 cri.go:89] found id: ""
	I1004 04:25:18.082423   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.082430   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:18.082435   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:18.082493   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:18.120507   67282 cri.go:89] found id: ""
	I1004 04:25:18.120534   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.120544   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:18.120550   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:18.120605   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:18.156601   67282 cri.go:89] found id: ""
	I1004 04:25:18.156629   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.156640   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:18.156650   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:18.156663   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:18.198393   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:18.198424   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:18.250992   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:18.251032   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:18.267984   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:18.268015   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:18.343283   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:18.343303   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:18.343314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:14.549511   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:17.048940   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:19.051125   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:18.648462   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:21.146813   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.147244   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:18.624165   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:20.627159   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.123629   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:20.922578   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:20.938037   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:20.938122   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:20.978389   67282 cri.go:89] found id: ""
	I1004 04:25:20.978417   67282 logs.go:282] 0 containers: []
	W1004 04:25:20.978426   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:20.978431   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:20.978478   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:21.033490   67282 cri.go:89] found id: ""
	I1004 04:25:21.033520   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.033528   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:21.033533   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:21.033589   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:21.087168   67282 cri.go:89] found id: ""
	I1004 04:25:21.087198   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.087209   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:21.087216   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:21.087299   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:21.144327   67282 cri.go:89] found id: ""
	I1004 04:25:21.144356   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.144366   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:21.144373   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:21.144431   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:21.183336   67282 cri.go:89] found id: ""
	I1004 04:25:21.183378   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.183390   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:21.183397   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:21.183459   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:21.221847   67282 cri.go:89] found id: ""
	I1004 04:25:21.221878   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.221892   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:21.221901   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:21.221961   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:21.258542   67282 cri.go:89] found id: ""
	I1004 04:25:21.258573   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.258584   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:21.258590   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:21.258652   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:21.303173   67282 cri.go:89] found id: ""
	I1004 04:25:21.303202   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.303211   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:21.303218   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:21.303243   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:21.358109   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:21.358146   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:21.373958   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:21.373987   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:21.450956   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:21.450980   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:21.451006   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:21.534763   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:21.534807   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:21.550109   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.550304   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:25.148868   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:27.647698   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:25.622123   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:27.624777   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:24.082856   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:24.098263   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:24.098336   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:24.144969   67282 cri.go:89] found id: ""
	I1004 04:25:24.144999   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.145009   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:24.145015   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:24.145072   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:24.185670   67282 cri.go:89] found id: ""
	I1004 04:25:24.185693   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.185702   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:24.185708   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:24.185769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:24.223657   67282 cri.go:89] found id: ""
	I1004 04:25:24.223691   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.223703   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:24.223710   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:24.223769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:24.261841   67282 cri.go:89] found id: ""
	I1004 04:25:24.261864   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.261872   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:24.261878   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:24.261938   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:24.299734   67282 cri.go:89] found id: ""
	I1004 04:25:24.299758   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.299769   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:24.299775   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:24.299867   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:24.337413   67282 cri.go:89] found id: ""
	I1004 04:25:24.337440   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.337450   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:24.337457   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:24.337523   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:24.375963   67282 cri.go:89] found id: ""
	I1004 04:25:24.375995   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.376007   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:24.376014   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:24.376073   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:24.415978   67282 cri.go:89] found id: ""
	I1004 04:25:24.416010   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.416021   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:24.416030   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:24.416045   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:24.458703   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:24.458738   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:24.510669   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:24.510704   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:24.525646   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:24.525687   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:24.603280   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:24.603310   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:24.603324   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:27.184935   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:27.200241   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:27.200321   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:27.237546   67282 cri.go:89] found id: ""
	I1004 04:25:27.237576   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.237588   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:27.237596   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:27.237653   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:27.272598   67282 cri.go:89] found id: ""
	I1004 04:25:27.272625   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.272634   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:27.272642   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:27.272700   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:27.306659   67282 cri.go:89] found id: ""
	I1004 04:25:27.306693   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.306706   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:27.306715   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:27.306779   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:27.344315   67282 cri.go:89] found id: ""
	I1004 04:25:27.344349   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.344363   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:27.344370   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:27.344428   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:27.380231   67282 cri.go:89] found id: ""
	I1004 04:25:27.380267   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.380278   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:27.380286   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:27.380346   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:27.418137   67282 cri.go:89] found id: ""
	I1004 04:25:27.418161   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.418169   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:27.418174   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:27.418225   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:27.458235   67282 cri.go:89] found id: ""
	I1004 04:25:27.458262   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.458283   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:27.458289   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:27.458342   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:27.495161   67282 cri.go:89] found id: ""
	I1004 04:25:27.495189   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.495198   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:27.495206   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:27.495217   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:27.547749   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:27.547795   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:27.563322   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:27.563355   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:27.636682   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:27.636710   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:27.636725   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:27.711316   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:27.711354   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:26.050001   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:28.548322   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.147210   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:32.647027   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.122267   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:32.122501   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.250361   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:30.265789   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:30.265866   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:30.305127   67282 cri.go:89] found id: ""
	I1004 04:25:30.305166   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.305183   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:30.305190   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:30.305258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:30.346529   67282 cri.go:89] found id: ""
	I1004 04:25:30.346560   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.346570   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:30.346577   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:30.346641   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:30.387368   67282 cri.go:89] found id: ""
	I1004 04:25:30.387407   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.387418   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:30.387425   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:30.387489   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:30.428193   67282 cri.go:89] found id: ""
	I1004 04:25:30.428230   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.428242   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:30.428248   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:30.428308   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:30.465484   67282 cri.go:89] found id: ""
	I1004 04:25:30.465509   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.465518   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:30.465523   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:30.465573   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:30.501133   67282 cri.go:89] found id: ""
	I1004 04:25:30.501163   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.501174   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:30.501181   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:30.501248   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:30.536492   67282 cri.go:89] found id: ""
	I1004 04:25:30.536519   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.536530   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:30.536536   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:30.536587   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:30.571721   67282 cri.go:89] found id: ""
	I1004 04:25:30.571745   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.571753   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:30.571761   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:30.571771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:30.626922   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:30.626958   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:30.641817   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:30.641852   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:30.725604   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:30.725633   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:30.725647   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:30.800359   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:30.800393   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:33.340747   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:33.355862   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:33.355936   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:33.397628   67282 cri.go:89] found id: ""
	I1004 04:25:33.397655   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.397662   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:33.397668   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:33.397718   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:33.442100   67282 cri.go:89] found id: ""
	I1004 04:25:33.442128   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.442137   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:33.442142   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:33.442187   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:33.481035   67282 cri.go:89] found id: ""
	I1004 04:25:33.481063   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.481076   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:33.481083   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:33.481149   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:30.549816   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:33.048791   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:35.147125   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:37.647224   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:34.122573   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:36.622639   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:33.516633   67282 cri.go:89] found id: ""
	I1004 04:25:33.516661   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.516669   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:33.516677   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:33.516727   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:33.556569   67282 cri.go:89] found id: ""
	I1004 04:25:33.556600   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.556610   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:33.556617   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:33.556679   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:33.591678   67282 cri.go:89] found id: ""
	I1004 04:25:33.591715   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.591724   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:33.591731   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:33.591786   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:33.626571   67282 cri.go:89] found id: ""
	I1004 04:25:33.626594   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.626602   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:33.626607   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:33.626650   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:33.664336   67282 cri.go:89] found id: ""
	I1004 04:25:33.664359   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.664367   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:33.664375   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:33.664386   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:33.748013   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:33.748047   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:33.786730   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:33.786767   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:33.839355   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:33.839392   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:33.853807   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:33.853835   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:33.920183   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:36.420485   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:36.435150   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:36.435221   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:36.471818   67282 cri.go:89] found id: ""
	I1004 04:25:36.471842   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.471850   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:36.471855   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:36.471908   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:36.511469   67282 cri.go:89] found id: ""
	I1004 04:25:36.511496   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.511504   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:36.511509   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:36.511557   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:36.552607   67282 cri.go:89] found id: ""
	I1004 04:25:36.552633   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.552641   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:36.552646   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:36.552702   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:36.596260   67282 cri.go:89] found id: ""
	I1004 04:25:36.596282   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.596290   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:36.596295   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:36.596340   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:36.636674   67282 cri.go:89] found id: ""
	I1004 04:25:36.636700   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.636708   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:36.636713   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:36.636764   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:36.675155   67282 cri.go:89] found id: ""
	I1004 04:25:36.675194   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.675206   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:36.675214   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:36.675279   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:36.713458   67282 cri.go:89] found id: ""
	I1004 04:25:36.713485   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.713493   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:36.713498   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:36.713552   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:36.754567   67282 cri.go:89] found id: ""
	I1004 04:25:36.754596   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.754607   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:36.754618   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:36.754631   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:36.824413   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:36.824439   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:36.824453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:36.900438   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:36.900471   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:36.942238   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:36.942264   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:36.992527   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:36.992556   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:35.050546   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:37.548965   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:39.647505   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:42.146720   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:38.623559   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:41.121785   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:43.122437   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:39.506599   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:39.520782   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:39.520854   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:39.561853   67282 cri.go:89] found id: ""
	I1004 04:25:39.561880   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.561891   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:39.561898   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:39.561955   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:39.597548   67282 cri.go:89] found id: ""
	I1004 04:25:39.597581   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.597591   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:39.597598   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:39.597659   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:39.634481   67282 cri.go:89] found id: ""
	I1004 04:25:39.634517   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.634525   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:39.634530   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:39.634575   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:39.677077   67282 cri.go:89] found id: ""
	I1004 04:25:39.677107   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.677117   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:39.677124   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:39.677185   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:39.716334   67282 cri.go:89] found id: ""
	I1004 04:25:39.716356   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.716364   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:39.716369   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:39.716416   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:39.754765   67282 cri.go:89] found id: ""
	I1004 04:25:39.754792   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.754803   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:39.754810   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:39.754863   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:39.788782   67282 cri.go:89] found id: ""
	I1004 04:25:39.788811   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.788824   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:39.788832   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:39.788890   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:39.821946   67282 cri.go:89] found id: ""
	I1004 04:25:39.821970   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.821979   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:39.821988   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:39.822001   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:39.892629   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:39.892657   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:39.892674   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:39.973480   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:39.973515   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:40.018175   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:40.018203   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:40.068585   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:40.068620   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:42.583639   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:42.597249   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:42.597333   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:42.631993   67282 cri.go:89] found id: ""
	I1004 04:25:42.632020   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.632030   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:42.632037   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:42.632091   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:42.669708   67282 cri.go:89] found id: ""
	I1004 04:25:42.669739   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.669749   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:42.669762   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:42.669836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:42.705995   67282 cri.go:89] found id: ""
	I1004 04:25:42.706019   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.706030   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:42.706037   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:42.706094   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:42.740436   67282 cri.go:89] found id: ""
	I1004 04:25:42.740458   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.740466   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:42.740472   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:42.740524   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:42.774516   67282 cri.go:89] found id: ""
	I1004 04:25:42.774546   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.774557   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:42.774564   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:42.774614   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:42.807471   67282 cri.go:89] found id: ""
	I1004 04:25:42.807502   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.807510   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:42.807516   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:42.807561   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:42.851943   67282 cri.go:89] found id: ""
	I1004 04:25:42.851968   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.851977   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:42.851983   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:42.852040   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:42.887762   67282 cri.go:89] found id: ""
	I1004 04:25:42.887801   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.887812   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:42.887822   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:42.887834   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:42.960398   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:42.960423   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:42.960440   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:43.040078   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:43.040117   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:43.081614   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:43.081638   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:43.132744   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:43.132781   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:39.551722   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:42.049418   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:44.049835   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:44.646919   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:47.146884   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:45.622878   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.122299   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:45.647332   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:45.660765   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:45.660834   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:45.696351   67282 cri.go:89] found id: ""
	I1004 04:25:45.696379   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.696390   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:45.696397   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:45.696449   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:45.738529   67282 cri.go:89] found id: ""
	I1004 04:25:45.738553   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.738561   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:45.738566   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:45.738621   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:45.773071   67282 cri.go:89] found id: ""
	I1004 04:25:45.773094   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.773103   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:45.773110   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:45.773165   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:45.810813   67282 cri.go:89] found id: ""
	I1004 04:25:45.810840   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.810852   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:45.810859   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:45.810913   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:45.848916   67282 cri.go:89] found id: ""
	I1004 04:25:45.848942   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.848951   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:45.848956   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:45.849014   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:45.886737   67282 cri.go:89] found id: ""
	I1004 04:25:45.886763   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.886772   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:45.886778   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:45.886825   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:45.922263   67282 cri.go:89] found id: ""
	I1004 04:25:45.922291   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.922301   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:45.922307   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:45.922364   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:45.956688   67282 cri.go:89] found id: ""
	I1004 04:25:45.956710   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.956718   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:45.956725   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:45.956737   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:46.007334   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:46.007365   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:46.020892   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:46.020916   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:46.089786   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:46.089809   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:46.089822   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:46.175987   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:46.176017   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:46.549153   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.549893   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:49.147322   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:51.647365   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:50.622540   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:52.623714   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.718354   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:48.733291   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:48.733347   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:48.769149   67282 cri.go:89] found id: ""
	I1004 04:25:48.769175   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.769185   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:48.769193   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:48.769249   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:48.804386   67282 cri.go:89] found id: ""
	I1004 04:25:48.804410   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.804418   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:48.804423   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:48.804467   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:48.841747   67282 cri.go:89] found id: ""
	I1004 04:25:48.841774   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.841782   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:48.841788   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:48.841836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:48.880025   67282 cri.go:89] found id: ""
	I1004 04:25:48.880048   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.880058   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:48.880064   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:48.880121   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:48.916506   67282 cri.go:89] found id: ""
	I1004 04:25:48.916530   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.916540   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:48.916547   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:48.916607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:48.952082   67282 cri.go:89] found id: ""
	I1004 04:25:48.952105   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.952116   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:48.952122   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:48.952177   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:48.986097   67282 cri.go:89] found id: ""
	I1004 04:25:48.986124   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.986135   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:48.986143   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:48.986210   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:49.020400   67282 cri.go:89] found id: ""
	I1004 04:25:49.020428   67282 logs.go:282] 0 containers: []
	W1004 04:25:49.020436   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:49.020445   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:49.020462   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:49.074724   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:49.074754   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:49.088504   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:49.088529   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:49.165940   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:49.165961   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:49.165972   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:49.244482   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:49.244519   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:51.786086   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:51.800644   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:51.800720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:51.839951   67282 cri.go:89] found id: ""
	I1004 04:25:51.839980   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.839990   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:51.839997   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:51.840055   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:51.878660   67282 cri.go:89] found id: ""
	I1004 04:25:51.878684   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.878695   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:51.878701   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:51.878762   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:51.916640   67282 cri.go:89] found id: ""
	I1004 04:25:51.916665   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.916672   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:51.916678   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:51.916725   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:51.953800   67282 cri.go:89] found id: ""
	I1004 04:25:51.953827   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.953835   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:51.953840   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:51.953897   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:51.993107   67282 cri.go:89] found id: ""
	I1004 04:25:51.993139   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.993150   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:51.993157   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:51.993214   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:52.027426   67282 cri.go:89] found id: ""
	I1004 04:25:52.027454   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.027464   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:52.027470   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:52.027521   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:52.063608   67282 cri.go:89] found id: ""
	I1004 04:25:52.063638   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.063650   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:52.063657   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:52.063717   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:52.100052   67282 cri.go:89] found id: ""
	I1004 04:25:52.100083   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.100094   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:52.100106   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:52.100125   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:52.113801   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:52.113827   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:52.201284   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:52.201311   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:52.201322   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:52.280014   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:52.280047   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:52.318120   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:52.318145   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:51.048719   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:53.050304   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:54.146697   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:56.147015   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:58.148736   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:55.122546   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:57.123051   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:54.872245   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:54.886914   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:54.886990   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:54.927117   67282 cri.go:89] found id: ""
	I1004 04:25:54.927144   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.927152   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:54.927157   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:54.927205   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:54.962510   67282 cri.go:89] found id: ""
	I1004 04:25:54.962540   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.962552   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:54.962559   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:54.962619   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:54.996812   67282 cri.go:89] found id: ""
	I1004 04:25:54.996839   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.996848   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:54.996854   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:54.996905   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:55.034557   67282 cri.go:89] found id: ""
	I1004 04:25:55.034587   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.034597   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:55.034605   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:55.034667   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:55.072383   67282 cri.go:89] found id: ""
	I1004 04:25:55.072416   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.072427   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:55.072434   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:55.072494   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:55.121561   67282 cri.go:89] found id: ""
	I1004 04:25:55.121588   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.121598   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:55.121604   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:55.121775   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:55.165525   67282 cri.go:89] found id: ""
	I1004 04:25:55.165553   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.165564   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:55.165570   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:55.165627   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:55.201808   67282 cri.go:89] found id: ""
	I1004 04:25:55.201836   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.201846   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:55.201857   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:55.201870   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:55.280889   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:55.280917   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:55.280932   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:55.354979   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:55.355012   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:55.397144   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:55.397174   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:55.448710   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:55.448746   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:57.963840   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:57.977027   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:57.977085   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:58.019244   67282 cri.go:89] found id: ""
	I1004 04:25:58.019273   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.019285   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:58.019293   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:58.019351   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:58.057979   67282 cri.go:89] found id: ""
	I1004 04:25:58.058008   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.058018   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:58.058027   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:58.058084   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:58.094607   67282 cri.go:89] found id: ""
	I1004 04:25:58.094639   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.094652   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:58.094658   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:58.094726   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:58.130150   67282 cri.go:89] found id: ""
	I1004 04:25:58.130177   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.130188   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:58.130196   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:58.130259   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:58.167662   67282 cri.go:89] found id: ""
	I1004 04:25:58.167691   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.167701   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:58.167709   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:58.167769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:58.203480   67282 cri.go:89] found id: ""
	I1004 04:25:58.203568   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.203585   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:58.203594   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:58.203662   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:58.239516   67282 cri.go:89] found id: ""
	I1004 04:25:58.239537   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.239545   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:58.239551   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:58.239595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:58.275525   67282 cri.go:89] found id: ""
	I1004 04:25:58.275553   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.275564   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:58.275574   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:58.275587   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:58.331191   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:58.331224   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:58.345629   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:58.345659   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:58.416297   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:58.416315   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:58.416326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:58.490659   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:58.490694   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:55.548913   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:57.549457   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:00.647858   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.146570   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:59.623396   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.624074   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.030058   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:01.044568   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:01.044659   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:01.082652   67282 cri.go:89] found id: ""
	I1004 04:26:01.082679   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.082688   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:01.082694   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:01.082750   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:01.120781   67282 cri.go:89] found id: ""
	I1004 04:26:01.120805   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.120814   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:01.120821   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:01.120878   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:01.159494   67282 cri.go:89] found id: ""
	I1004 04:26:01.159523   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.159531   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:01.159537   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:01.159584   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:01.195482   67282 cri.go:89] found id: ""
	I1004 04:26:01.195512   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.195521   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:01.195529   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:01.195589   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:01.233971   67282 cri.go:89] found id: ""
	I1004 04:26:01.233996   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.234006   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:01.234014   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:01.234076   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:01.275935   67282 cri.go:89] found id: ""
	I1004 04:26:01.275958   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.275966   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:01.275971   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:01.276018   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:01.315512   67282 cri.go:89] found id: ""
	I1004 04:26:01.315535   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.315543   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:01.315548   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:01.315603   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:01.356465   67282 cri.go:89] found id: ""
	I1004 04:26:01.356491   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.356505   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:01.356513   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:01.356523   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:01.409237   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:01.409280   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:01.423426   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:01.423453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:01.501372   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:01.501397   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:01.501413   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:01.591087   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:01.591131   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:59.549485   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.550138   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.550258   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:05.646818   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:07.647322   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.634636   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:06.122840   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:04.152506   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:04.166847   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:04.166911   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:04.203138   67282 cri.go:89] found id: ""
	I1004 04:26:04.203167   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.203177   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:04.203184   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:04.203243   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:04.237427   67282 cri.go:89] found id: ""
	I1004 04:26:04.237453   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.237464   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:04.237471   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:04.237525   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:04.272468   67282 cri.go:89] found id: ""
	I1004 04:26:04.272499   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.272511   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:04.272518   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:04.272584   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:04.307347   67282 cri.go:89] found id: ""
	I1004 04:26:04.307373   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.307384   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:04.307390   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:04.307448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:04.342450   67282 cri.go:89] found id: ""
	I1004 04:26:04.342487   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.342498   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:04.342506   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:04.342568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:04.382846   67282 cri.go:89] found id: ""
	I1004 04:26:04.382874   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.382885   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:04.382893   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:04.382945   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:04.418234   67282 cri.go:89] found id: ""
	I1004 04:26:04.418260   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.418268   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:04.418273   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:04.418328   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:04.453433   67282 cri.go:89] found id: ""
	I1004 04:26:04.453456   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.453464   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:04.453473   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:04.453487   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:04.502093   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:04.502123   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:04.515865   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:04.515897   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:04.595672   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:04.595698   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:04.595713   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:04.675273   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:04.675304   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:07.214965   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:07.229495   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:07.229568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:07.268541   67282 cri.go:89] found id: ""
	I1004 04:26:07.268580   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.268591   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:07.268599   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:07.268662   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:07.321382   67282 cri.go:89] found id: ""
	I1004 04:26:07.321414   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.321424   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:07.321431   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:07.321490   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:07.379840   67282 cri.go:89] found id: ""
	I1004 04:26:07.379869   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.379878   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:07.379884   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:07.379928   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:07.431304   67282 cri.go:89] found id: ""
	I1004 04:26:07.431333   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.431343   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:07.431349   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:07.431407   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:07.466853   67282 cri.go:89] found id: ""
	I1004 04:26:07.466880   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.466888   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:07.466893   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:07.466951   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:07.501587   67282 cri.go:89] found id: ""
	I1004 04:26:07.501613   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.501624   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:07.501630   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:07.501685   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:07.536326   67282 cri.go:89] found id: ""
	I1004 04:26:07.536354   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.536364   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:07.536371   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:07.536426   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:07.575257   67282 cri.go:89] found id: ""
	I1004 04:26:07.575283   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.575292   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:07.575299   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:07.575310   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:07.629477   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:07.629515   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:07.643294   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:07.643326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:07.720324   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:07.720350   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:07.720365   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:07.797641   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:07.797678   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:06.049580   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:08.548786   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.146544   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:12.146842   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:08.622497   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.622759   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:12.624285   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.339392   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:10.353341   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:10.353397   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:10.391023   67282 cri.go:89] found id: ""
	I1004 04:26:10.391049   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.391059   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:10.391066   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:10.391129   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:10.424345   67282 cri.go:89] found id: ""
	I1004 04:26:10.424376   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.424388   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:10.424396   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:10.424466   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:10.459344   67282 cri.go:89] found id: ""
	I1004 04:26:10.459374   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.459387   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:10.459394   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:10.459451   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:10.494898   67282 cri.go:89] found id: ""
	I1004 04:26:10.494921   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.494929   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:10.494935   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:10.494982   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:10.531084   67282 cri.go:89] found id: ""
	I1004 04:26:10.531111   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.531122   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:10.531129   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:10.531185   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:10.566918   67282 cri.go:89] found id: ""
	I1004 04:26:10.566949   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.566960   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:10.566967   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:10.567024   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:10.604888   67282 cri.go:89] found id: ""
	I1004 04:26:10.604923   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.604935   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:10.604942   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:10.605013   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:10.641578   67282 cri.go:89] found id: ""
	I1004 04:26:10.641606   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.641620   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:10.641631   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:10.641648   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:10.696848   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:10.696882   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:10.710393   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:10.710417   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:10.780854   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:10.780881   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:10.780895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:10.861732   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:10.861771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:13.403231   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:13.417246   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:13.417319   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:13.451581   67282 cri.go:89] found id: ""
	I1004 04:26:13.451607   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.451616   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:13.451621   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:13.451681   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:13.488362   67282 cri.go:89] found id: ""
	I1004 04:26:13.488388   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.488396   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:13.488401   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:13.488449   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:10.549905   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:13.048997   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:14.646627   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:16.647879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:15.123067   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:17.622729   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:13.522697   67282 cri.go:89] found id: ""
	I1004 04:26:13.522729   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.522740   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:13.522751   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:13.522803   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:13.564926   67282 cri.go:89] found id: ""
	I1004 04:26:13.564959   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.564972   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:13.564981   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:13.565058   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:13.600582   67282 cri.go:89] found id: ""
	I1004 04:26:13.600612   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.600622   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:13.600630   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:13.600688   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:13.634550   67282 cri.go:89] found id: ""
	I1004 04:26:13.634575   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.634584   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:13.634591   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:13.634646   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:13.669281   67282 cri.go:89] found id: ""
	I1004 04:26:13.669311   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.669320   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:13.669326   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:13.669388   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:13.707664   67282 cri.go:89] found id: ""
	I1004 04:26:13.707693   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.707703   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:13.707713   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:13.707727   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:13.721127   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:13.721168   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:13.788026   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:13.788051   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:13.788067   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:13.864505   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:13.864542   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:13.902896   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:13.902921   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:16.456813   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:16.470071   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:16.470138   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:16.506085   67282 cri.go:89] found id: ""
	I1004 04:26:16.506114   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.506125   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:16.506133   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:16.506189   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:16.540016   67282 cri.go:89] found id: ""
	I1004 04:26:16.540044   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.540052   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:16.540056   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:16.540100   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:16.579247   67282 cri.go:89] found id: ""
	I1004 04:26:16.579272   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.579280   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:16.579285   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:16.579332   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:16.615552   67282 cri.go:89] found id: ""
	I1004 04:26:16.615579   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.615601   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:16.615621   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:16.615675   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:16.652639   67282 cri.go:89] found id: ""
	I1004 04:26:16.652660   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.652671   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:16.652678   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:16.652732   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:16.689607   67282 cri.go:89] found id: ""
	I1004 04:26:16.689631   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.689643   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:16.689650   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:16.689720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:16.724430   67282 cri.go:89] found id: ""
	I1004 04:26:16.724458   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.724469   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:16.724475   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:16.724534   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:16.758378   67282 cri.go:89] found id: ""
	I1004 04:26:16.758412   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.758423   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:16.758434   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:16.758454   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:16.826234   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:16.826259   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:16.826273   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:16.906908   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:16.906945   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:16.950295   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:16.950321   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:17.002216   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:17.002253   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:15.549441   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:17.549816   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.147105   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:21.147403   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.622982   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:21.624073   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.516253   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:19.529664   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:19.529726   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:19.566669   67282 cri.go:89] found id: ""
	I1004 04:26:19.566700   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.566711   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:19.566718   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:19.566772   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:19.605923   67282 cri.go:89] found id: ""
	I1004 04:26:19.605951   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.605961   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:19.605968   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:19.606025   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:19.645132   67282 cri.go:89] found id: ""
	I1004 04:26:19.645158   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.645168   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:19.645175   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:19.645235   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:19.687135   67282 cri.go:89] found id: ""
	I1004 04:26:19.687160   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.687171   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:19.687178   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:19.687256   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:19.724180   67282 cri.go:89] found id: ""
	I1004 04:26:19.724213   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.724224   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:19.724230   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:19.724295   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:19.761608   67282 cri.go:89] found id: ""
	I1004 04:26:19.761638   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.761649   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:19.761656   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:19.761714   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:19.795060   67282 cri.go:89] found id: ""
	I1004 04:26:19.795089   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.795099   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:19.795106   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:19.795164   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:19.835678   67282 cri.go:89] found id: ""
	I1004 04:26:19.835703   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.835712   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:19.835722   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:19.835736   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:19.889508   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:19.889543   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:19.903206   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:19.903233   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:19.973445   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:19.973471   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:19.973485   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:20.053996   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:20.054034   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:22.594171   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:22.609084   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:22.609145   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:22.650423   67282 cri.go:89] found id: ""
	I1004 04:26:22.650449   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.650459   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:22.650466   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:22.650525   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:22.686420   67282 cri.go:89] found id: ""
	I1004 04:26:22.686450   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.686461   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:22.686469   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:22.686535   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:22.721385   67282 cri.go:89] found id: ""
	I1004 04:26:22.721408   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.721416   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:22.721421   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:22.721484   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:22.765461   67282 cri.go:89] found id: ""
	I1004 04:26:22.765492   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.765504   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:22.765511   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:22.765569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:22.798192   67282 cri.go:89] found id: ""
	I1004 04:26:22.798220   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.798230   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:22.798235   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:22.798293   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:22.833110   67282 cri.go:89] found id: ""
	I1004 04:26:22.833138   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.833147   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:22.833153   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:22.833212   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:22.875653   67282 cri.go:89] found id: ""
	I1004 04:26:22.875684   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.875696   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:22.875704   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:22.875766   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:22.913906   67282 cri.go:89] found id: ""
	I1004 04:26:22.913931   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.913938   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:22.913946   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:22.913957   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:22.969480   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:22.969511   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:22.983475   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:22.983500   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:23.059953   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:23.059982   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:23.059996   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:23.139106   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:23.139134   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:19.550307   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:22.048618   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:23.647507   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:26.147135   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:24.122370   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:26.122976   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:25.678489   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:25.692648   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:25.692705   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:25.728232   67282 cri.go:89] found id: ""
	I1004 04:26:25.728261   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.728269   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:25.728276   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:25.728335   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:25.763956   67282 cri.go:89] found id: ""
	I1004 04:26:25.763982   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.763991   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:25.763998   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:25.764057   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:25.799715   67282 cri.go:89] found id: ""
	I1004 04:26:25.799743   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.799753   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:25.799761   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:25.799840   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:25.834823   67282 cri.go:89] found id: ""
	I1004 04:26:25.834855   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.834866   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:25.834873   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:25.834933   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:25.869194   67282 cri.go:89] found id: ""
	I1004 04:26:25.869224   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.869235   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:25.869242   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:25.869303   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:25.903514   67282 cri.go:89] found id: ""
	I1004 04:26:25.903543   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.903553   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:25.903558   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:25.903606   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:25.939887   67282 cri.go:89] found id: ""
	I1004 04:26:25.939919   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.939930   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:25.939938   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:25.939996   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:25.981922   67282 cri.go:89] found id: ""
	I1004 04:26:25.981944   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.981952   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:25.981960   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:25.981971   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:26.064860   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:26.064891   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:26.105272   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:26.105296   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:26.162602   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:26.162640   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:26.176408   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:26.176439   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:26.242264   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:24.549364   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:27.049470   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.646788   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.146205   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:33.146879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.622691   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.122181   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:33.123226   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.742417   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:28.755655   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:28.755723   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:28.789338   67282 cri.go:89] found id: ""
	I1004 04:26:28.789361   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.789369   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:28.789374   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:28.789420   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:28.823513   67282 cri.go:89] found id: ""
	I1004 04:26:28.823544   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.823555   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:28.823562   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:28.823619   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:28.858826   67282 cri.go:89] found id: ""
	I1004 04:26:28.858854   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.858866   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:28.858873   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:28.858927   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:28.892552   67282 cri.go:89] found id: ""
	I1004 04:26:28.892579   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.892587   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:28.892593   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:28.892639   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:28.929250   67282 cri.go:89] found id: ""
	I1004 04:26:28.929277   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.929284   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:28.929289   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:28.929335   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:28.966554   67282 cri.go:89] found id: ""
	I1004 04:26:28.966581   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.966589   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:28.966594   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:28.966642   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:28.999930   67282 cri.go:89] found id: ""
	I1004 04:26:28.999954   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.999964   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:28.999970   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:29.000025   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:29.033687   67282 cri.go:89] found id: ""
	I1004 04:26:29.033717   67282 logs.go:282] 0 containers: []
	W1004 04:26:29.033727   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:29.033737   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:29.033752   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:29.109486   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:29.109523   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:29.149125   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:29.149152   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:29.197830   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:29.197861   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:29.211182   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:29.211204   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:29.276808   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:31.777659   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:31.791374   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:31.791425   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:31.825453   67282 cri.go:89] found id: ""
	I1004 04:26:31.825480   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.825489   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:31.825495   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:31.825553   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:31.857845   67282 cri.go:89] found id: ""
	I1004 04:26:31.857875   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.857884   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:31.857893   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:31.857949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:31.892282   67282 cri.go:89] found id: ""
	I1004 04:26:31.892309   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.892317   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:31.892322   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:31.892366   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:31.926016   67282 cri.go:89] found id: ""
	I1004 04:26:31.926037   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.926045   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:31.926051   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:31.926094   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:31.961382   67282 cri.go:89] found id: ""
	I1004 04:26:31.961415   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.961425   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:31.961433   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:31.961492   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:31.994570   67282 cri.go:89] found id: ""
	I1004 04:26:31.994602   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.994613   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:31.994620   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:31.994675   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:32.027359   67282 cri.go:89] found id: ""
	I1004 04:26:32.027383   67282 logs.go:282] 0 containers: []
	W1004 04:26:32.027391   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:32.027397   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:32.027448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:32.063518   67282 cri.go:89] found id: ""
	I1004 04:26:32.063545   67282 logs.go:282] 0 containers: []
	W1004 04:26:32.063555   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:32.063565   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:32.063577   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:32.151555   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:32.151582   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:32.190678   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:32.190700   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:32.243567   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:32.243596   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:32.256293   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:32.256320   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:32.329513   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:29.548687   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.550184   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:34.050659   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:35.147870   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:37.646571   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:35.623302   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:38.122555   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:34.830126   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:34.844760   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:34.844833   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:34.878409   67282 cri.go:89] found id: ""
	I1004 04:26:34.878433   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.878440   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:34.878445   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:34.878500   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:34.916493   67282 cri.go:89] found id: ""
	I1004 04:26:34.916516   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.916524   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:34.916532   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:34.916577   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:34.954532   67282 cri.go:89] found id: ""
	I1004 04:26:34.954556   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.954565   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:34.954570   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:34.954616   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:34.987163   67282 cri.go:89] found id: ""
	I1004 04:26:34.987190   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.987198   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:34.987205   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:34.987261   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:35.021351   67282 cri.go:89] found id: ""
	I1004 04:26:35.021379   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.021388   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:35.021394   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:35.021452   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:35.056350   67282 cri.go:89] found id: ""
	I1004 04:26:35.056376   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.056384   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:35.056390   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:35.056448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:35.093375   67282 cri.go:89] found id: ""
	I1004 04:26:35.093402   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.093412   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:35.093420   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:35.093486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:35.130509   67282 cri.go:89] found id: ""
	I1004 04:26:35.130532   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.130541   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:35.130549   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:35.130562   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:35.188138   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:35.188174   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:35.202226   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:35.202267   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:35.276652   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:35.276675   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:35.276688   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:35.357339   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:35.357373   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:37.898166   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:37.911319   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:37.911387   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:37.944551   67282 cri.go:89] found id: ""
	I1004 04:26:37.944578   67282 logs.go:282] 0 containers: []
	W1004 04:26:37.944590   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:37.944597   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:37.944652   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:37.978066   67282 cri.go:89] found id: ""
	I1004 04:26:37.978093   67282 logs.go:282] 0 containers: []
	W1004 04:26:37.978101   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:37.978107   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:37.978163   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:38.011065   67282 cri.go:89] found id: ""
	I1004 04:26:38.011095   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.011104   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:38.011109   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:38.011156   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:38.050323   67282 cri.go:89] found id: ""
	I1004 04:26:38.050349   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.050359   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:38.050366   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:38.050425   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:38.089141   67282 cri.go:89] found id: ""
	I1004 04:26:38.089169   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.089177   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:38.089182   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:38.089258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:38.122625   67282 cri.go:89] found id: ""
	I1004 04:26:38.122653   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.122663   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:38.122671   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:38.122719   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:38.159957   67282 cri.go:89] found id: ""
	I1004 04:26:38.159982   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.159990   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:38.159996   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:38.160085   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:38.194592   67282 cri.go:89] found id: ""
	I1004 04:26:38.194618   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.194626   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:38.194646   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:38.194657   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:38.263914   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:38.263945   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:38.263958   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:38.339864   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:38.339895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:38.375477   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:38.375505   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:38.428292   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:38.428320   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:36.050815   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:38.548602   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:39.646794   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.146914   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:40.123280   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.623659   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:40.941910   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:40.955041   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:40.955117   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:40.991278   67282 cri.go:89] found id: ""
	I1004 04:26:40.991307   67282 logs.go:282] 0 containers: []
	W1004 04:26:40.991317   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:40.991325   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:40.991389   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:41.025347   67282 cri.go:89] found id: ""
	I1004 04:26:41.025373   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.025385   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:41.025392   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:41.025450   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:41.060974   67282 cri.go:89] found id: ""
	I1004 04:26:41.061001   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.061019   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:41.061026   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:41.061087   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:41.097557   67282 cri.go:89] found id: ""
	I1004 04:26:41.097587   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.097598   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:41.097605   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:41.097665   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:41.136371   67282 cri.go:89] found id: ""
	I1004 04:26:41.136396   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.136405   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:41.136412   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:41.136472   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:41.172590   67282 cri.go:89] found id: ""
	I1004 04:26:41.172617   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.172627   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:41.172634   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:41.172687   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:41.209124   67282 cri.go:89] found id: ""
	I1004 04:26:41.209146   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.209154   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:41.209159   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:41.209214   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:41.250654   67282 cri.go:89] found id: ""
	I1004 04:26:41.250687   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.250699   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:41.250709   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:41.250723   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:41.305814   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:41.305864   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:41.322961   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:41.322989   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:41.427611   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:41.427632   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:41.427648   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:41.505830   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:41.505877   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:40.549691   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.549838   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:44.647149   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.146894   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:45.122344   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.122706   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:44.050902   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:44.065277   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:44.065343   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:44.101089   67282 cri.go:89] found id: ""
	I1004 04:26:44.101110   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.101117   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:44.101123   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:44.101174   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:44.138570   67282 cri.go:89] found id: ""
	I1004 04:26:44.138593   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.138601   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:44.138606   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:44.138650   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:44.178423   67282 cri.go:89] found id: ""
	I1004 04:26:44.178456   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.178478   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:44.178486   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:44.178556   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:44.213301   67282 cri.go:89] found id: ""
	I1004 04:26:44.213330   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.213338   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:44.213344   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:44.213401   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:44.247653   67282 cri.go:89] found id: ""
	I1004 04:26:44.247681   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.247688   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:44.247694   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:44.247756   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:44.281667   67282 cri.go:89] found id: ""
	I1004 04:26:44.281693   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.281704   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:44.281711   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:44.281767   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:44.314637   67282 cri.go:89] found id: ""
	I1004 04:26:44.314667   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.314677   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:44.314684   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:44.314760   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:44.349432   67282 cri.go:89] found id: ""
	I1004 04:26:44.349459   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.349469   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:44.349479   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:44.349492   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:44.397134   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:44.397168   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:44.410708   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:44.410738   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:44.482025   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:44.482049   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:44.482065   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:44.562652   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:44.562699   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:47.101459   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:47.116923   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:47.117020   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:47.153495   67282 cri.go:89] found id: ""
	I1004 04:26:47.153524   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.153534   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:47.153541   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:47.153601   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:47.189976   67282 cri.go:89] found id: ""
	I1004 04:26:47.190004   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.190014   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:47.190023   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:47.190084   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:47.225712   67282 cri.go:89] found id: ""
	I1004 04:26:47.225740   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.225748   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:47.225754   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:47.225800   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:47.261565   67282 cri.go:89] found id: ""
	I1004 04:26:47.261593   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.261603   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:47.261608   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:47.261665   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:47.298152   67282 cri.go:89] found id: ""
	I1004 04:26:47.298204   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.298214   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:47.298223   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:47.298279   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:47.338226   67282 cri.go:89] found id: ""
	I1004 04:26:47.338253   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.338261   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:47.338267   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:47.338320   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:47.378859   67282 cri.go:89] found id: ""
	I1004 04:26:47.378892   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.378902   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:47.378909   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:47.378964   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:47.418161   67282 cri.go:89] found id: ""
	I1004 04:26:47.418186   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.418194   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:47.418203   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:47.418213   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:47.470271   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:47.470311   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:47.484416   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:47.484453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:47.556744   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:47.556767   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:47.556778   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:47.634266   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:47.634299   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:45.050501   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.550072   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:49.147562   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:51.648504   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:49.623375   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:52.122346   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:50.175746   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:50.191850   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:50.191945   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:50.229542   67282 cri.go:89] found id: ""
	I1004 04:26:50.229574   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.229584   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:50.229593   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:50.229655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:50.268401   67282 cri.go:89] found id: ""
	I1004 04:26:50.268432   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.268441   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:50.268449   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:50.268522   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:50.302927   67282 cri.go:89] found id: ""
	I1004 04:26:50.302954   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.302964   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:50.302969   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:50.303029   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:50.336617   67282 cri.go:89] found id: ""
	I1004 04:26:50.336646   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.336656   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:50.336663   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:50.336724   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:50.372871   67282 cri.go:89] found id: ""
	I1004 04:26:50.372901   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.372911   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:50.372918   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:50.372977   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:50.409601   67282 cri.go:89] found id: ""
	I1004 04:26:50.409629   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.409640   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:50.409648   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:50.409723   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:50.451899   67282 cri.go:89] found id: ""
	I1004 04:26:50.451927   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.451935   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:50.451940   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:50.451991   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:50.487306   67282 cri.go:89] found id: ""
	I1004 04:26:50.487332   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.487343   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:50.487353   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:50.487369   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:50.565167   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:50.565192   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:50.565207   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:50.646155   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:50.646194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:50.688459   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:50.688489   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:50.742416   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:50.742460   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:53.257063   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:53.270546   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:53.270618   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:53.306504   67282 cri.go:89] found id: ""
	I1004 04:26:53.306530   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.306538   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:53.306544   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:53.306594   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:53.343256   67282 cri.go:89] found id: ""
	I1004 04:26:53.343285   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.343293   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:53.343299   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:53.343352   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:53.380834   67282 cri.go:89] found id: ""
	I1004 04:26:53.380864   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.380873   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:53.380880   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:53.380940   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:53.417361   67282 cri.go:89] found id: ""
	I1004 04:26:53.417391   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.417404   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:53.417415   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:53.417479   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:53.451948   67282 cri.go:89] found id: ""
	I1004 04:26:53.451970   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.451978   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:53.451983   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:53.452039   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:53.487731   67282 cri.go:89] found id: ""
	I1004 04:26:53.487756   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.487764   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:53.487769   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:53.487836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:50.049952   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:52.050275   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:54.151420   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.647593   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:54.122386   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.623398   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:53.531549   67282 cri.go:89] found id: ""
	I1004 04:26:53.531573   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.531582   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:53.531587   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:53.531643   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:53.578123   67282 cri.go:89] found id: ""
	I1004 04:26:53.578151   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.578162   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:53.578180   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:53.578195   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:53.643062   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:53.643093   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:53.696157   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:53.696194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:53.709884   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:53.709910   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:53.791272   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:53.791297   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:53.791314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:56.371608   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:56.386293   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:56.386376   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:56.425531   67282 cri.go:89] found id: ""
	I1004 04:26:56.425560   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.425571   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:56.425578   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:56.425646   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:56.470293   67282 cri.go:89] found id: ""
	I1004 04:26:56.470326   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.470335   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:56.470340   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:56.470400   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:56.508927   67282 cri.go:89] found id: ""
	I1004 04:26:56.508955   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.508963   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:56.508968   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:56.509018   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:56.549149   67282 cri.go:89] found id: ""
	I1004 04:26:56.549178   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.549191   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:56.549199   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:56.549270   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:56.589412   67282 cri.go:89] found id: ""
	I1004 04:26:56.589441   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.589451   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:56.589459   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:56.589517   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:56.624732   67282 cri.go:89] found id: ""
	I1004 04:26:56.624760   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.624770   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:56.624776   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:56.624838   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:56.662385   67282 cri.go:89] found id: ""
	I1004 04:26:56.662413   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.662421   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:56.662427   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:56.662483   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:56.697982   67282 cri.go:89] found id: ""
	I1004 04:26:56.698014   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.698025   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:56.698036   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:56.698049   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:56.750597   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:56.750633   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:56.764884   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:56.764921   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:56.844404   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:56.844433   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:56.844451   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:56.924373   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:56.924406   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:54.548706   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.549763   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.049294   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:58.648470   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:01.146948   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.148357   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.123321   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:01.622391   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.466449   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:59.481897   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:59.481972   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:59.535384   67282 cri.go:89] found id: ""
	I1004 04:26:59.535411   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.535422   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:59.535428   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:59.535486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:59.595843   67282 cri.go:89] found id: ""
	I1004 04:26:59.595875   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.595886   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:59.595894   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:59.595954   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:59.641010   67282 cri.go:89] found id: ""
	I1004 04:26:59.641041   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.641049   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:59.641057   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:59.641102   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:59.679705   67282 cri.go:89] found id: ""
	I1004 04:26:59.679736   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.679746   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:59.679753   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:59.679828   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:59.715960   67282 cri.go:89] found id: ""
	I1004 04:26:59.715985   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.715993   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:59.715998   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:59.716047   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:59.757406   67282 cri.go:89] found id: ""
	I1004 04:26:59.757442   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.757453   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:59.757461   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:59.757528   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:59.792038   67282 cri.go:89] found id: ""
	I1004 04:26:59.792066   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.792076   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:59.792083   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:59.792141   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:59.830258   67282 cri.go:89] found id: ""
	I1004 04:26:59.830281   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.830289   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:59.830296   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:59.830308   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:59.877273   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:59.877304   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:59.932570   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:59.932610   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:59.945896   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:59.945919   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:00.020363   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:00.020392   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:00.020412   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:02.601022   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:02.615039   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:02.615112   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:02.654541   67282 cri.go:89] found id: ""
	I1004 04:27:02.654567   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.654574   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:02.654579   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:02.654638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:02.691313   67282 cri.go:89] found id: ""
	I1004 04:27:02.691338   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.691349   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:02.691355   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:02.691414   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:02.735337   67282 cri.go:89] found id: ""
	I1004 04:27:02.735367   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.735376   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:02.735383   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:02.735486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:02.769604   67282 cri.go:89] found id: ""
	I1004 04:27:02.769628   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.769638   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:02.769643   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:02.769704   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:02.812913   67282 cri.go:89] found id: ""
	I1004 04:27:02.812938   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.812949   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:02.812954   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:02.813020   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:02.849910   67282 cri.go:89] found id: ""
	I1004 04:27:02.849939   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.849949   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:02.849956   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:02.850023   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:02.889467   67282 cri.go:89] found id: ""
	I1004 04:27:02.889497   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.889509   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:02.889517   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:02.889575   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:02.928508   67282 cri.go:89] found id: ""
	I1004 04:27:02.928529   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.928537   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:02.928545   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:02.928556   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:02.942783   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:02.942821   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:03.018282   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:03.018304   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:03.018314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:03.101588   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:03.101622   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:03.149911   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:03.149937   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:01.051581   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.550066   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.646200   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:07.648479   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.622932   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.623005   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.121151   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.703125   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:05.717243   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:05.717303   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:05.752564   67282 cri.go:89] found id: ""
	I1004 04:27:05.752588   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.752597   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:05.752609   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:05.752656   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:05.786955   67282 cri.go:89] found id: ""
	I1004 04:27:05.786983   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.786994   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:05.787001   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:05.787073   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:05.823848   67282 cri.go:89] found id: ""
	I1004 04:27:05.823882   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.823893   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:05.823901   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:05.823970   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:05.866192   67282 cri.go:89] found id: ""
	I1004 04:27:05.866220   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.866238   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:05.866246   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:05.866305   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:05.904051   67282 cri.go:89] found id: ""
	I1004 04:27:05.904078   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.904089   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:05.904096   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:05.904154   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:05.940041   67282 cri.go:89] found id: ""
	I1004 04:27:05.940075   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.940085   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:05.940092   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:05.940158   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:05.975758   67282 cri.go:89] found id: ""
	I1004 04:27:05.975799   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.975810   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:05.975818   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:05.975892   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:06.011044   67282 cri.go:89] found id: ""
	I1004 04:27:06.011086   67282 logs.go:282] 0 containers: []
	W1004 04:27:06.011096   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:06.011105   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:06.011116   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:06.024900   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:06.024937   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:06.109932   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:06.109960   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:06.109976   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:06.189517   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:06.189557   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:06.230019   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:06.230048   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:06.050004   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.548768   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:10.147814   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.646430   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:10.122097   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.123967   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.785355   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:08.799156   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:08.799218   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:08.843606   67282 cri.go:89] found id: ""
	I1004 04:27:08.843634   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.843643   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:08.843648   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:08.843698   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:08.884418   67282 cri.go:89] found id: ""
	I1004 04:27:08.884443   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.884450   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:08.884456   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:08.884503   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:08.925878   67282 cri.go:89] found id: ""
	I1004 04:27:08.925906   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.925914   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:08.925920   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:08.925970   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:08.966127   67282 cri.go:89] found id: ""
	I1004 04:27:08.966157   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.966167   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:08.966173   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:08.966227   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:09.010646   67282 cri.go:89] found id: ""
	I1004 04:27:09.010672   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.010682   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:09.010702   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:09.010769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:09.049738   67282 cri.go:89] found id: ""
	I1004 04:27:09.049761   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.049768   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:09.049774   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:09.049825   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:09.082709   67282 cri.go:89] found id: ""
	I1004 04:27:09.082739   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.082747   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:09.082752   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:09.082808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:09.120574   67282 cri.go:89] found id: ""
	I1004 04:27:09.120605   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.120617   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:09.120626   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:09.120636   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:09.202880   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:09.202922   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:09.242668   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:09.242700   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:09.298662   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:09.298703   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:09.314832   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:09.314868   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:09.389062   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:11.889645   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:11.902953   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:11.903012   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:11.939846   67282 cri.go:89] found id: ""
	I1004 04:27:11.939874   67282 logs.go:282] 0 containers: []
	W1004 04:27:11.939882   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:11.939888   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:11.939936   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:11.975281   67282 cri.go:89] found id: ""
	I1004 04:27:11.975303   67282 logs.go:282] 0 containers: []
	W1004 04:27:11.975311   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:11.975317   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:11.975370   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:12.011400   67282 cri.go:89] found id: ""
	I1004 04:27:12.011428   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.011438   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:12.011443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:12.011506   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:12.046862   67282 cri.go:89] found id: ""
	I1004 04:27:12.046889   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.046898   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:12.046905   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:12.046960   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:12.081537   67282 cri.go:89] found id: ""
	I1004 04:27:12.081569   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.081581   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:12.081590   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:12.081655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:12.121982   67282 cri.go:89] found id: ""
	I1004 04:27:12.122010   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.122021   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:12.122028   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:12.122086   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:12.161419   67282 cri.go:89] found id: ""
	I1004 04:27:12.161460   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.161473   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:12.161481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:12.161549   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:12.202188   67282 cri.go:89] found id: ""
	I1004 04:27:12.202230   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.202242   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:12.202253   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:12.202267   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:12.253424   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:12.253462   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:12.268116   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:12.268141   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:12.337788   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:12.337814   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:12.337826   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:12.417359   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:12.417395   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:10.549097   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.549239   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.647267   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:17.147526   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.623050   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:16.623702   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.959596   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:14.973031   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:14.973090   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:15.011451   67282 cri.go:89] found id: ""
	I1004 04:27:15.011487   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.011497   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:15.011513   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:15.011572   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:15.055767   67282 cri.go:89] found id: ""
	I1004 04:27:15.055817   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.055829   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:15.055836   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:15.055915   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:15.096357   67282 cri.go:89] found id: ""
	I1004 04:27:15.096385   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.096394   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:15.096399   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:15.096456   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:15.131824   67282 cri.go:89] found id: ""
	I1004 04:27:15.131853   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.131863   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:15.131870   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:15.131932   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:15.169250   67282 cri.go:89] found id: ""
	I1004 04:27:15.169285   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.169299   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:15.169307   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:15.169373   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:15.206852   67282 cri.go:89] found id: ""
	I1004 04:27:15.206881   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.206889   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:15.206895   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:15.206949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:15.241392   67282 cri.go:89] found id: ""
	I1004 04:27:15.241421   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.241431   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:15.241439   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:15.241498   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:15.280697   67282 cri.go:89] found id: ""
	I1004 04:27:15.280723   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.280734   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:15.280744   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:15.280758   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:15.361681   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:15.361716   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:15.404640   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:15.404676   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:15.457287   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:15.457326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:15.471162   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:15.471188   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:15.544157   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:18.045094   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:18.060228   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:18.060310   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:18.096659   67282 cri.go:89] found id: ""
	I1004 04:27:18.096688   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.096697   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:18.096703   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:18.096757   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:18.135538   67282 cri.go:89] found id: ""
	I1004 04:27:18.135565   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.135573   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:18.135579   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:18.135629   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:18.171051   67282 cri.go:89] found id: ""
	I1004 04:27:18.171082   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.171098   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:18.171106   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:18.171168   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:18.205696   67282 cri.go:89] found id: ""
	I1004 04:27:18.205725   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.205735   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:18.205742   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:18.205803   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:18.240545   67282 cri.go:89] found id: ""
	I1004 04:27:18.240566   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.240576   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:18.240584   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:18.240638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:18.279185   67282 cri.go:89] found id: ""
	I1004 04:27:18.279221   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.279232   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:18.279239   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:18.279310   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:18.318395   67282 cri.go:89] found id: ""
	I1004 04:27:18.318417   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.318424   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:18.318430   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:18.318476   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:18.352367   67282 cri.go:89] found id: ""
	I1004 04:27:18.352390   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.352398   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:18.352407   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:18.352420   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:18.365604   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:18.365637   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:18.438407   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:18.438427   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:18.438438   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:14.549690   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:16.550244   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:18.550355   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:19.647031   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:22.147826   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:19.126090   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:21.623910   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:18.513645   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:18.513679   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:18.557224   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:18.557250   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:21.111005   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:21.126573   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:21.126631   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:21.161161   67282 cri.go:89] found id: ""
	I1004 04:27:21.161190   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.161201   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:21.161207   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:21.161258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:21.199517   67282 cri.go:89] found id: ""
	I1004 04:27:21.199544   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.199555   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:21.199562   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:21.199625   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:21.236210   67282 cri.go:89] found id: ""
	I1004 04:27:21.236238   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.236246   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:21.236251   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:21.236311   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:21.272720   67282 cri.go:89] found id: ""
	I1004 04:27:21.272746   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.272753   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:21.272759   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:21.272808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:21.311439   67282 cri.go:89] found id: ""
	I1004 04:27:21.311474   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.311484   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:21.311491   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:21.311551   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:21.360400   67282 cri.go:89] found id: ""
	I1004 04:27:21.360427   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.360436   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:21.360443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:21.360511   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:21.394627   67282 cri.go:89] found id: ""
	I1004 04:27:21.394656   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.394667   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:21.394673   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:21.394721   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:21.429736   67282 cri.go:89] found id: ""
	I1004 04:27:21.429762   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.429770   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:21.429778   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:21.429789   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:21.482773   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:21.482808   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:21.497570   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:21.497595   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:21.582335   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:21.582355   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:21.582367   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:21.662196   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:21.662230   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:21.050000   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:23.050516   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.647074   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:27.147999   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.123142   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:26.624049   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.205743   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:24.222878   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:24.222951   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:24.263410   67282 cri.go:89] found id: ""
	I1004 04:27:24.263450   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.263462   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:24.263469   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:24.263532   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:24.306892   67282 cri.go:89] found id: ""
	I1004 04:27:24.306923   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.306934   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:24.306941   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:24.307008   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:24.345522   67282 cri.go:89] found id: ""
	I1004 04:27:24.345559   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.345571   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:24.345579   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:24.345638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:24.384893   67282 cri.go:89] found id: ""
	I1004 04:27:24.384918   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.384925   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:24.384931   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:24.384978   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:24.420998   67282 cri.go:89] found id: ""
	I1004 04:27:24.421025   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.421036   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:24.421043   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:24.421105   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:24.456277   67282 cri.go:89] found id: ""
	I1004 04:27:24.456305   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.456315   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:24.456322   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:24.456383   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:24.497852   67282 cri.go:89] found id: ""
	I1004 04:27:24.497881   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.497892   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:24.497900   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:24.497960   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:24.538702   67282 cri.go:89] found id: ""
	I1004 04:27:24.538736   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.538755   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:24.538766   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:24.538778   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:24.553747   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:24.553773   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:24.638059   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:24.638081   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:24.638093   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:24.718165   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:24.718212   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:24.759770   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:24.759811   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:27.311684   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:27.327493   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:27.327570   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:27.362804   67282 cri.go:89] found id: ""
	I1004 04:27:27.362827   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.362836   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:27.362841   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:27.362888   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:27.401576   67282 cri.go:89] found id: ""
	I1004 04:27:27.401604   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.401614   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:27.401621   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:27.401682   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:27.445152   67282 cri.go:89] found id: ""
	I1004 04:27:27.445177   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.445187   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:27.445193   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:27.445240   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:27.482710   67282 cri.go:89] found id: ""
	I1004 04:27:27.482734   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.482742   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:27.482749   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:27.482808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:27.519459   67282 cri.go:89] found id: ""
	I1004 04:27:27.519488   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.519498   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:27.519505   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:27.519569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:27.559381   67282 cri.go:89] found id: ""
	I1004 04:27:27.559407   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.559417   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:27.559423   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:27.559468   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:27.609040   67282 cri.go:89] found id: ""
	I1004 04:27:27.609068   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.609076   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:27.609081   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:27.609128   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:27.654537   67282 cri.go:89] found id: ""
	I1004 04:27:27.654569   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.654579   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:27.654590   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:27.654603   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:27.709062   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:27.709098   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:27.722931   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:27.722955   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:27.796863   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:27.796884   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:27.796895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:27.879840   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:27.879876   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:25.549643   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:27.551373   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:29.646879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:31.646956   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:29.122087   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:31.122774   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:30.423644   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:30.439256   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:30.439311   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:30.479612   67282 cri.go:89] found id: ""
	I1004 04:27:30.479640   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.479648   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:30.479654   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:30.479750   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:30.522846   67282 cri.go:89] found id: ""
	I1004 04:27:30.522879   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.522890   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:30.522898   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:30.522946   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:30.558935   67282 cri.go:89] found id: ""
	I1004 04:27:30.558962   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.558971   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:30.558976   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:30.559032   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:30.603383   67282 cri.go:89] found id: ""
	I1004 04:27:30.603411   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.603421   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:30.603428   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:30.603492   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:30.644700   67282 cri.go:89] found id: ""
	I1004 04:27:30.644727   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.644737   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:30.644744   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:30.644799   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:30.680328   67282 cri.go:89] found id: ""
	I1004 04:27:30.680358   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.680367   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:30.680372   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:30.680419   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:30.717973   67282 cri.go:89] found id: ""
	I1004 04:27:30.717995   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.718005   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:30.718021   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:30.718082   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:30.755838   67282 cri.go:89] found id: ""
	I1004 04:27:30.755866   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.755874   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:30.755882   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:30.755893   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:30.809999   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:30.810036   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:30.824447   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:30.824491   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:30.902008   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:30.902030   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:30.902043   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:30.986938   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:30.986984   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:30.049983   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:32.050033   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:34.050671   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.647707   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:36.146619   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.624575   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:36.122046   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.531108   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:33.546681   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:33.546759   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:33.586444   67282 cri.go:89] found id: ""
	I1004 04:27:33.586469   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.586479   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:33.586486   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:33.586552   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:33.629340   67282 cri.go:89] found id: ""
	I1004 04:27:33.629365   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.629373   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:33.629378   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:33.629429   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:33.668446   67282 cri.go:89] found id: ""
	I1004 04:27:33.668473   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.668483   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:33.668490   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:33.668548   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:33.706287   67282 cri.go:89] found id: ""
	I1004 04:27:33.706312   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.706320   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:33.706327   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:33.706385   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:33.746161   67282 cri.go:89] found id: ""
	I1004 04:27:33.746189   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.746200   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:33.746207   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:33.746270   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:33.782157   67282 cri.go:89] found id: ""
	I1004 04:27:33.782184   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.782194   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:33.782200   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:33.782262   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:33.820332   67282 cri.go:89] found id: ""
	I1004 04:27:33.820361   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.820371   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:33.820378   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:33.820437   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:33.859431   67282 cri.go:89] found id: ""
	I1004 04:27:33.859458   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.859467   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:33.859475   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:33.859485   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:33.910259   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:33.910292   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:33.925149   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:33.925177   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:34.006153   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:34.006187   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:34.006202   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:34.115882   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:34.115916   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:36.662964   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:36.677071   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:36.677139   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:36.720785   67282 cri.go:89] found id: ""
	I1004 04:27:36.720807   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.720818   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:36.720826   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:36.720875   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:36.757535   67282 cri.go:89] found id: ""
	I1004 04:27:36.757563   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.757574   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:36.757582   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:36.757630   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:36.800989   67282 cri.go:89] found id: ""
	I1004 04:27:36.801024   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.801038   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:36.801046   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:36.801112   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:36.837101   67282 cri.go:89] found id: ""
	I1004 04:27:36.837122   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.837131   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:36.837136   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:36.837181   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:36.876325   67282 cri.go:89] found id: ""
	I1004 04:27:36.876358   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.876370   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:36.876379   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:36.876444   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:36.914720   67282 cri.go:89] found id: ""
	I1004 04:27:36.914749   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.914759   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:36.914767   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:36.914828   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:36.949672   67282 cri.go:89] found id: ""
	I1004 04:27:36.949694   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.949701   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:36.949706   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:36.949754   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:36.983374   67282 cri.go:89] found id: ""
	I1004 04:27:36.983406   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.983416   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:36.983427   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:36.983440   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:37.039040   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:37.039075   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:37.054873   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:37.054898   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:37.131537   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:37.131562   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:37.131577   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:37.213958   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:37.213990   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:36.548751   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:39.049804   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:38.646028   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:40.646213   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:42.648505   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:38.623560   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:40.623721   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:43.122033   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:39.754264   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:39.771465   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:39.771545   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:39.829530   67282 cri.go:89] found id: ""
	I1004 04:27:39.829560   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.829572   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:39.829580   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:39.829639   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:39.876055   67282 cri.go:89] found id: ""
	I1004 04:27:39.876078   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.876090   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:39.876095   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:39.876142   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:39.913304   67282 cri.go:89] found id: ""
	I1004 04:27:39.913327   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.913335   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:39.913340   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:39.913389   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:39.948821   67282 cri.go:89] found id: ""
	I1004 04:27:39.948847   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.948855   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:39.948862   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:39.948916   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:39.986994   67282 cri.go:89] found id: ""
	I1004 04:27:39.987023   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.987034   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:39.987041   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:39.987141   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:40.026627   67282 cri.go:89] found id: ""
	I1004 04:27:40.026656   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.026668   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:40.026675   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:40.026734   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:40.067028   67282 cri.go:89] found id: ""
	I1004 04:27:40.067068   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.067079   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:40.067086   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:40.067144   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:40.105638   67282 cri.go:89] found id: ""
	I1004 04:27:40.105667   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.105677   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:40.105694   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:40.105707   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:40.159425   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:40.159467   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:40.175045   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:40.175073   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:40.261967   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:40.261989   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:40.262002   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:40.345317   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:40.345354   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:42.888115   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:42.901889   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:42.901948   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:42.938556   67282 cri.go:89] found id: ""
	I1004 04:27:42.938587   67282 logs.go:282] 0 containers: []
	W1004 04:27:42.938597   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:42.938604   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:42.938668   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:42.974569   67282 cri.go:89] found id: ""
	I1004 04:27:42.974595   67282 logs.go:282] 0 containers: []
	W1004 04:27:42.974606   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:42.974613   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:42.974679   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:43.010552   67282 cri.go:89] found id: ""
	I1004 04:27:43.010581   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.010593   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:43.010600   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:43.010655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:43.046204   67282 cri.go:89] found id: ""
	I1004 04:27:43.046237   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.046247   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:43.046254   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:43.046313   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:43.081612   67282 cri.go:89] found id: ""
	I1004 04:27:43.081644   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.081655   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:43.081662   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:43.081729   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:43.121103   67282 cri.go:89] found id: ""
	I1004 04:27:43.121126   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.121133   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:43.121139   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:43.121191   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:43.157104   67282 cri.go:89] found id: ""
	I1004 04:27:43.157128   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.157136   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:43.157141   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:43.157196   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:43.198927   67282 cri.go:89] found id: ""
	I1004 04:27:43.198951   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.198958   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:43.198966   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:43.198975   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:43.254534   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:43.254563   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:43.268106   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:43.268130   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:43.344382   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:43.344410   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:43.344425   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:43.426916   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:43.426948   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:41.549364   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:43.549590   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.146452   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:47.148300   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.126135   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:47.622568   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.966806   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:45.980187   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:45.980252   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:46.014196   67282 cri.go:89] found id: ""
	I1004 04:27:46.014220   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.014228   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:46.014233   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:46.014295   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:46.053910   67282 cri.go:89] found id: ""
	I1004 04:27:46.053940   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.053951   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:46.053957   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:46.054013   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:46.087896   67282 cri.go:89] found id: ""
	I1004 04:27:46.087921   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.087930   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:46.087936   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:46.087985   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:46.123441   67282 cri.go:89] found id: ""
	I1004 04:27:46.123465   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.123475   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:46.123481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:46.123545   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:46.159664   67282 cri.go:89] found id: ""
	I1004 04:27:46.159688   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.159698   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:46.159704   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:46.159761   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:46.195474   67282 cri.go:89] found id: ""
	I1004 04:27:46.195501   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.195512   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:46.195525   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:46.195569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:46.228670   67282 cri.go:89] found id: ""
	I1004 04:27:46.228693   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.228701   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:46.228706   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:46.228759   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:46.265278   67282 cri.go:89] found id: ""
	I1004 04:27:46.265303   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.265311   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:46.265325   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:46.265338   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:46.315135   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:46.315163   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:46.327765   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:46.327797   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:46.393157   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:46.393173   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:46.393184   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:46.473026   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:46.473058   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:46.049285   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:48.549053   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:49.647027   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:52.146841   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:50.122921   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:52.622913   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:49.011972   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:49.025718   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:49.025783   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:49.062749   67282 cri.go:89] found id: ""
	I1004 04:27:49.062774   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.062782   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:49.062788   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:49.062844   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:49.100838   67282 cri.go:89] found id: ""
	I1004 04:27:49.100886   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.100897   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:49.100904   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:49.100961   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:49.139966   67282 cri.go:89] found id: ""
	I1004 04:27:49.139990   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.140000   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:49.140007   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:49.140088   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:49.179347   67282 cri.go:89] found id: ""
	I1004 04:27:49.179373   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.179384   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:49.179391   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:49.179435   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:49.218086   67282 cri.go:89] found id: ""
	I1004 04:27:49.218112   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.218121   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:49.218127   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:49.218181   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:49.254779   67282 cri.go:89] found id: ""
	I1004 04:27:49.254811   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.254823   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:49.254830   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:49.254888   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:49.287351   67282 cri.go:89] found id: ""
	I1004 04:27:49.287381   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.287392   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:49.287398   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:49.287456   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:49.320051   67282 cri.go:89] found id: ""
	I1004 04:27:49.320078   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.320089   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:49.320100   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:49.320112   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:49.371270   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:49.371300   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:49.384403   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:49.384432   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:49.468132   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:49.468154   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:49.468167   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:49.543179   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:49.543211   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:52.093235   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:52.108446   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:52.108520   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:52.147590   67282 cri.go:89] found id: ""
	I1004 04:27:52.147613   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.147620   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:52.147626   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:52.147677   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:52.183066   67282 cri.go:89] found id: ""
	I1004 04:27:52.183095   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.183105   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:52.183112   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:52.183170   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:52.223109   67282 cri.go:89] found id: ""
	I1004 04:27:52.223140   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.223154   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:52.223165   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:52.223223   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:52.259547   67282 cri.go:89] found id: ""
	I1004 04:27:52.259573   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.259582   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:52.259587   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:52.259638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:52.296934   67282 cri.go:89] found id: ""
	I1004 04:27:52.296961   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.296971   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:52.296979   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:52.297040   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:52.331650   67282 cri.go:89] found id: ""
	I1004 04:27:52.331671   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.331679   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:52.331684   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:52.331728   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:52.365111   67282 cri.go:89] found id: ""
	I1004 04:27:52.365139   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.365150   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:52.365157   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:52.365239   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:52.400974   67282 cri.go:89] found id: ""
	I1004 04:27:52.401010   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.401023   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:52.401035   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:52.401049   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:52.484732   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:52.484771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:52.523322   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:52.523348   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:52.576671   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:52.576702   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:52.590263   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:52.590291   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:52.666646   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:50.549475   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:53.049259   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:54.646262   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.153196   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:55.123174   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.123932   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:55.166856   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:55.181481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:55.181562   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:55.218023   67282 cri.go:89] found id: ""
	I1004 04:27:55.218048   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.218056   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:55.218063   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:55.218121   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:55.256439   67282 cri.go:89] found id: ""
	I1004 04:27:55.256464   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.256472   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:55.256477   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:55.256531   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:55.294563   67282 cri.go:89] found id: ""
	I1004 04:27:55.294588   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.294596   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:55.294601   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:55.294656   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:55.331266   67282 cri.go:89] found id: ""
	I1004 04:27:55.331290   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.331300   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:55.331306   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:55.331370   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:55.367286   67282 cri.go:89] found id: ""
	I1004 04:27:55.367314   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.367325   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:55.367332   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:55.367391   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:55.402031   67282 cri.go:89] found id: ""
	I1004 04:27:55.402054   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.402062   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:55.402068   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:55.402122   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:55.437737   67282 cri.go:89] found id: ""
	I1004 04:27:55.437764   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.437774   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:55.437780   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:55.437842   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:55.470654   67282 cri.go:89] found id: ""
	I1004 04:27:55.470692   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.470704   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:55.470713   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:55.470726   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:55.521364   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:55.521393   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:55.534691   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:55.534716   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:55.600902   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:55.600923   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:55.600933   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:55.678896   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:55.678940   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:58.220086   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:58.234049   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:58.234110   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:58.281112   67282 cri.go:89] found id: ""
	I1004 04:27:58.281135   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.281143   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:58.281148   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:58.281191   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:58.320549   67282 cri.go:89] found id: ""
	I1004 04:27:58.320575   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.320584   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:58.320589   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:58.320635   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:58.355139   67282 cri.go:89] found id: ""
	I1004 04:27:58.355166   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.355174   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:58.355179   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:58.355225   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:58.387809   67282 cri.go:89] found id: ""
	I1004 04:27:58.387836   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.387846   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:58.387851   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:58.387908   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:58.420264   67282 cri.go:89] found id: ""
	I1004 04:27:58.420287   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.420295   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:58.420300   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:58.420349   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:58.455409   67282 cri.go:89] found id: ""
	I1004 04:27:58.455431   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.455438   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:58.455443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:58.455487   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:58.488708   67282 cri.go:89] found id: ""
	I1004 04:27:58.488734   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.488742   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:58.488749   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:58.488797   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:55.051622   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.548584   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:59.646699   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:01.648277   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:59.623008   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:02.122303   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:58.522139   67282 cri.go:89] found id: ""
	I1004 04:27:58.522161   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.522169   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:58.522176   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:58.522187   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:58.604653   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:58.604683   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:58.645141   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:58.645169   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:58.699716   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:58.699748   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:58.713197   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:58.713228   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:58.781998   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:01.282429   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:01.297266   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:01.297343   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:01.330421   67282 cri.go:89] found id: ""
	I1004 04:28:01.330446   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.330454   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:28:01.330459   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:01.330514   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:01.366960   67282 cri.go:89] found id: ""
	I1004 04:28:01.366983   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.366992   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:28:01.366998   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:01.367067   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:01.400886   67282 cri.go:89] found id: ""
	I1004 04:28:01.400910   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.400920   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:28:01.400931   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:01.400987   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:01.435556   67282 cri.go:89] found id: ""
	I1004 04:28:01.435586   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.435594   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:28:01.435601   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:01.435649   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:01.475772   67282 cri.go:89] found id: ""
	I1004 04:28:01.475810   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.475820   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:28:01.475826   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:01.475884   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:01.512380   67282 cri.go:89] found id: ""
	I1004 04:28:01.512403   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.512411   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:28:01.512417   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:01.512465   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:01.550488   67282 cri.go:89] found id: ""
	I1004 04:28:01.550517   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.550528   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:01.550536   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:28:01.550595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:28:01.586216   67282 cri.go:89] found id: ""
	I1004 04:28:01.586249   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.586261   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:28:01.586271   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:01.586285   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:01.640819   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:01.640860   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:01.656990   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:01.657020   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:28:01.731326   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:01.731354   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:01.731368   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:01.810007   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:28:01.810044   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:59.548748   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:01.551035   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.043116   66755 pod_ready.go:82] duration metric: took 4m0.000354814s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" ...
	E1004 04:28:04.043143   66755 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" (will not retry!)
	I1004 04:28:04.043167   66755 pod_ready.go:39] duration metric: took 4m15.403862245s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:04.043219   66755 kubeadm.go:597] duration metric: took 4m23.226496183s to restartPrimaryControlPlane
	W1004 04:28:04.043288   66755 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1004 04:28:04.043316   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:28:04.146697   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:06.147038   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:08.147201   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.122463   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:06.622379   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.352648   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:04.366150   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:04.366227   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:04.403272   67282 cri.go:89] found id: ""
	I1004 04:28:04.403298   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.403308   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:28:04.403315   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:04.403371   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:04.439237   67282 cri.go:89] found id: ""
	I1004 04:28:04.439269   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.439280   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:28:04.439287   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:04.439345   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:04.475532   67282 cri.go:89] found id: ""
	I1004 04:28:04.475558   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.475569   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:28:04.475576   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:04.475638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:04.511738   67282 cri.go:89] found id: ""
	I1004 04:28:04.511765   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.511775   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:28:04.511792   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:04.511850   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:04.553536   67282 cri.go:89] found id: ""
	I1004 04:28:04.553561   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.553568   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:28:04.553574   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:04.553625   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:04.589016   67282 cri.go:89] found id: ""
	I1004 04:28:04.589044   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.589053   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:28:04.589058   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:04.589106   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:04.622780   67282 cri.go:89] found id: ""
	I1004 04:28:04.622808   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.622817   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:04.622823   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:28:04.622879   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:28:04.662620   67282 cri.go:89] found id: ""
	I1004 04:28:04.662641   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.662649   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:28:04.662659   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:04.662669   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:04.717894   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:04.717928   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:04.732353   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:04.732385   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:28:04.806443   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:04.806469   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:04.806492   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:04.887684   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:28:04.887717   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:07.426630   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:07.440242   67282 kubeadm.go:597] duration metric: took 4m3.475062199s to restartPrimaryControlPlane
	W1004 04:28:07.440318   67282 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1004 04:28:07.440346   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:28:08.147532   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:08.162175   67282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:28:08.172013   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:28:08.181741   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:28:08.181757   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:28:08.181801   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:28:08.191002   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:28:08.191046   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:28:08.200929   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:28:08.210241   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:28:08.210286   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:28:08.219693   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:28:08.229497   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:28:08.229534   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:28:08.239583   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:28:08.249207   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:28:08.249252   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:28:08.258516   67282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:28:08.328054   67282 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:28:08.328132   67282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:28:08.472265   67282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:28:08.472420   67282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:28:08.472543   67282 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 04:28:08.655873   67282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:28:08.657726   67282 out.go:235]   - Generating certificates and keys ...
	I1004 04:28:08.657817   67282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:28:08.657876   67282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:28:08.657942   67282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:28:08.658034   67282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:28:08.658149   67282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:28:08.658235   67282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:28:08.658309   67282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:28:08.658396   67282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:28:08.658503   67282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:28:08.658600   67282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:28:08.658651   67282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:28:08.658707   67282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:28:08.706486   67282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:28:08.909036   67282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:28:09.285968   67282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:28:09.499963   67282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:28:09.516914   67282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:28:09.517832   67282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:28:09.517900   67282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:28:09.664925   67282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:28:10.147391   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:12.646012   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:09.121686   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:11.123086   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:13.123578   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:09.666691   67282 out.go:235]   - Booting up control plane ...
	I1004 04:28:09.666889   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:28:09.671298   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:28:09.672046   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:28:09.672956   67282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:28:09.685069   67282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 04:28:14.646614   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:16.646683   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:15.125374   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:17.125685   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:18.646777   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:21.147299   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:19.623872   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:22.123077   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:23.646460   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:25.647096   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:28.147324   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:24.623730   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:27.123516   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:30.379460   66755 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.336110507s)
	I1004 04:28:30.379544   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:30.395622   66755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:28:30.406790   66755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:28:30.417380   66755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:28:30.417408   66755 kubeadm.go:157] found existing configuration files:
	
	I1004 04:28:30.417458   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:28:30.427925   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:28:30.427993   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:28:30.438694   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:28:30.448898   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:28:30.448972   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:28:30.459463   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:28:30.469227   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:28:30.469281   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:28:30.479979   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:28:30.489873   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:28:30.489936   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:28:30.499999   66755 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:28:30.549707   66755 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 04:28:30.549771   66755 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:28:30.663468   66755 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:28:30.663595   66755 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:28:30.663698   66755 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 04:28:30.675750   66755 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:28:30.677655   66755 out.go:235]   - Generating certificates and keys ...
	I1004 04:28:30.677760   66755 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:28:30.677868   66755 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:28:30.678010   66755 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:28:30.678102   66755 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:28:30.678217   66755 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:28:30.678289   66755 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:28:30.678378   66755 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:28:30.678470   66755 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:28:30.678566   66755 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:28:30.678732   66755 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:28:30.679295   66755 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:28:30.679383   66755 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:28:30.826979   66755 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:28:30.900919   66755 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 04:28:31.098221   66755 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:28:31.243668   66755 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:28:31.411766   66755 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:28:31.412181   66755 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:28:31.414652   66755 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:28:30.646927   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:32.647767   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:29.129148   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:31.623284   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:31.416504   66755 out.go:235]   - Booting up control plane ...
	I1004 04:28:31.416620   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:28:31.416730   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:28:31.418284   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:28:31.437379   66755 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:28:31.443450   66755 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:28:31.443505   66755 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:28:31.586540   66755 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 04:28:31.586706   66755 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 04:28:32.088382   66755 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.195244ms
	I1004 04:28:32.088510   66755 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 04:28:37.090291   66755 kubeadm.go:310] [api-check] The API server is healthy after 5.001756025s
	I1004 04:28:37.103845   66755 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 04:28:37.127230   66755 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 04:28:37.156917   66755 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 04:28:37.157181   66755 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-934812 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 04:28:37.171399   66755 kubeadm.go:310] [bootstrap-token] Using token: 1wt5ey.lvccf2aeyngf9mt3
	I1004 04:28:34.648249   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:37.148680   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:33.623901   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:36.122762   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:38.123147   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:37.172939   66755 out.go:235]   - Configuring RBAC rules ...
	I1004 04:28:37.173086   66755 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 04:28:37.179454   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 04:28:37.188765   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 04:28:37.192599   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 04:28:37.200359   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 04:28:37.204872   66755 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 04:28:37.498753   66755 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 04:28:37.931621   66755 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 04:28:38.497855   66755 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 04:28:38.498949   66755 kubeadm.go:310] 
	I1004 04:28:38.499023   66755 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 04:28:38.499055   66755 kubeadm.go:310] 
	I1004 04:28:38.499183   66755 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 04:28:38.499195   66755 kubeadm.go:310] 
	I1004 04:28:38.499229   66755 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 04:28:38.499316   66755 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 04:28:38.499385   66755 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 04:28:38.499393   66755 kubeadm.go:310] 
	I1004 04:28:38.499481   66755 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 04:28:38.499498   66755 kubeadm.go:310] 
	I1004 04:28:38.499563   66755 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 04:28:38.499571   66755 kubeadm.go:310] 
	I1004 04:28:38.499653   66755 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 04:28:38.499742   66755 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 04:28:38.499871   66755 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 04:28:38.499888   66755 kubeadm.go:310] 
	I1004 04:28:38.499994   66755 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 04:28:38.500104   66755 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 04:28:38.500115   66755 kubeadm.go:310] 
	I1004 04:28:38.500220   66755 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1wt5ey.lvccf2aeyngf9mt3 \
	I1004 04:28:38.500350   66755 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 \
	I1004 04:28:38.500387   66755 kubeadm.go:310] 	--control-plane 
	I1004 04:28:38.500402   66755 kubeadm.go:310] 
	I1004 04:28:38.500478   66755 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 04:28:38.500484   66755 kubeadm.go:310] 
	I1004 04:28:38.500563   66755 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1wt5ey.lvccf2aeyngf9mt3 \
	I1004 04:28:38.500686   66755 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 
	I1004 04:28:38.501820   66755 kubeadm.go:310] W1004 04:28:30.522396    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 04:28:38.502147   66755 kubeadm.go:310] W1004 04:28:30.524006    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 04:28:38.502282   66755 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:28:38.502311   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:28:38.502321   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:28:38.504185   66755 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:28:38.505600   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:28:38.518746   66755 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:28:38.541311   66755 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:28:38.541422   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:38.541460   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-934812 minikube.k8s.io/updated_at=2024_10_04T04_28_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=embed-certs-934812 minikube.k8s.io/primary=true
	I1004 04:28:38.605537   66755 ops.go:34] apiserver oom_adj: -16
	I1004 04:28:38.765084   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:39.646916   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:41.651456   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:39.265365   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:39.765925   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:40.265135   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:40.766204   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:41.265734   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:41.765404   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:42.265993   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:42.765826   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:43.265776   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:43.353243   66755 kubeadm.go:1113] duration metric: took 4.811892444s to wait for elevateKubeSystemPrivileges
	I1004 04:28:43.353288   66755 kubeadm.go:394] duration metric: took 5m2.586827656s to StartCluster
	I1004 04:28:43.353313   66755 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:28:43.353402   66755 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:28:43.355058   66755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:28:43.355309   66755 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:28:43.355388   66755 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:28:43.355533   66755 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-934812"
	I1004 04:28:43.355542   66755 addons.go:69] Setting default-storageclass=true in profile "embed-certs-934812"
	I1004 04:28:43.355556   66755 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-934812"
	I1004 04:28:43.355563   66755 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-934812"
	W1004 04:28:43.355568   66755 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:28:43.355584   66755 config.go:182] Loaded profile config "embed-certs-934812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:28:43.355598   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.355639   66755 addons.go:69] Setting metrics-server=true in profile "embed-certs-934812"
	I1004 04:28:43.355658   66755 addons.go:234] Setting addon metrics-server=true in "embed-certs-934812"
	W1004 04:28:43.355666   66755 addons.go:243] addon metrics-server should already be in state true
	I1004 04:28:43.355694   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.356024   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356075   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356095   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.356108   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.356075   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356173   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.357087   66755 out.go:177] * Verifying Kubernetes components...
	I1004 04:28:43.358428   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:28:43.373646   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45715
	I1004 04:28:43.373874   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I1004 04:28:43.374344   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.374344   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.374927   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.374948   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.375003   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.375027   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.375285   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.375342   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.375499   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.375884   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.375928   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.376269   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43725
	I1004 04:28:43.376636   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.377073   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.377099   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.377455   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.377883   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.377918   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.378402   66755 addons.go:234] Setting addon default-storageclass=true in "embed-certs-934812"
	W1004 04:28:43.378420   66755 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:28:43.378447   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.378705   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.378734   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.394001   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
	I1004 04:28:43.394289   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I1004 04:28:43.394645   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.394760   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.395195   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.395213   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.395302   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.395317   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.395596   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.395626   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.395842   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.396120   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.396160   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.397590   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.399391   66755 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:28:43.400581   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:28:43.400598   66755 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:28:43.400619   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.405134   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.405778   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I1004 04:28:43.405968   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.405996   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.406230   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.406383   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.406428   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.406571   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.406698   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.406825   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.406847   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.407455   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.407600   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.409278   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.411006   66755 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:28:40.622426   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:42.623400   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:43.412106   66755 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:28:43.412124   66755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:28:43.412389   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.414167   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
	I1004 04:28:43.414796   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.415285   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.415309   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.415657   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.415710   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.415911   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.416195   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.416217   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.416440   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.416628   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.416759   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.416856   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.418235   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.418426   66755 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:28:43.418436   66755 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:28:43.418456   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.421305   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.421761   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.421779   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.421966   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.422654   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.422789   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.422877   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.580648   66755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:28:43.615728   66755 node_ready.go:35] waiting up to 6m0s for node "embed-certs-934812" to be "Ready" ...
	I1004 04:28:43.625558   66755 node_ready.go:49] node "embed-certs-934812" has status "Ready":"True"
	I1004 04:28:43.625600   66755 node_ready.go:38] duration metric: took 9.827384ms for node "embed-certs-934812" to be "Ready" ...
	I1004 04:28:43.625612   66755 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:43.634425   66755 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:43.748926   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:28:43.774727   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:28:43.781558   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:28:43.781589   66755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:28:43.838039   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:28:43.838067   66755 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:28:43.945364   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:28:43.945392   66755 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:28:44.005000   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:28:44.253491   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.253521   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.253828   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.253896   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.253910   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.253925   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.253938   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.254130   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.254149   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.254164   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.267367   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.267396   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.267680   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.267700   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.864663   66755 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089890385s)
	I1004 04:28:44.864722   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.864734   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.865046   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.865070   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.865086   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.865095   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.866872   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.866877   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.866907   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.138868   66755 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133828074s)
	I1004 04:28:45.138926   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:45.138942   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:45.139243   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:45.139265   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.139276   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:45.139283   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:45.139484   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:45.139497   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.139507   66755 addons.go:475] Verifying addon metrics-server=true in "embed-certs-934812"
	I1004 04:28:45.141046   66755 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1004 04:28:44.147013   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:44.648117   67541 pod_ready.go:82] duration metric: took 4m0.007930603s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	E1004 04:28:44.648144   67541 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 04:28:44.648154   67541 pod_ready.go:39] duration metric: took 4m7.419382357s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:44.648170   67541 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:28:44.648200   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:44.648256   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:44.712473   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:44.712500   67541 cri.go:89] found id: ""
	I1004 04:28:44.712510   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:44.712568   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.717619   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:44.717688   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:44.760036   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:44.760061   67541 cri.go:89] found id: ""
	I1004 04:28:44.760071   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:44.760124   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.766402   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:44.766465   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:44.821766   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:44.821792   67541 cri.go:89] found id: ""
	I1004 04:28:44.821801   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:44.821858   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.826315   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:44.826370   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:44.873526   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:44.873547   67541 cri.go:89] found id: ""
	I1004 04:28:44.873556   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:44.873615   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.878375   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:44.878442   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:44.920240   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:44.920261   67541 cri.go:89] found id: ""
	I1004 04:28:44.920270   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:44.920322   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.925102   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:44.925158   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:44.967386   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:44.967406   67541 cri.go:89] found id: ""
	I1004 04:28:44.967416   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:44.967471   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.971979   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:44.972056   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:45.009842   67541 cri.go:89] found id: ""
	I1004 04:28:45.009869   67541 logs.go:282] 0 containers: []
	W1004 04:28:45.009881   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:45.009890   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:45.009952   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:45.055166   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:45.055189   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:45.055194   67541 cri.go:89] found id: ""
	I1004 04:28:45.055201   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:45.055258   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:45.060362   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:45.066118   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:45.066351   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:45.128185   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:45.128221   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:45.270042   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:45.270084   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:45.309065   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:45.309093   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:45.352299   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:45.352327   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:45.401846   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:45.401882   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:45.447474   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:45.447530   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:45.500734   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:45.500765   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:46.040224   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:46.040275   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:46.112675   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:46.112716   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:46.128530   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:46.128553   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:46.175007   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:46.175039   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:46.222706   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:46.222738   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:44.623804   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:47.122548   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:45.142166   66755 addons.go:510] duration metric: took 1.786788452s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1004 04:28:45.642731   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:46.641705   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.641730   66755 pod_ready.go:82] duration metric: took 3.007270041s for pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.641743   66755 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.646744   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.646767   66755 pod_ready.go:82] duration metric: took 5.01485ms for pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.646777   66755 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.652554   66755 pod_ready.go:93] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.652572   66755 pod_ready.go:82] duration metric: took 5.78883ms for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.652580   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:48.659404   66755 pod_ready.go:103] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:51.158765   66755 pod_ready.go:93] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.158787   66755 pod_ready.go:82] duration metric: took 4.506200726s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.158796   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.162949   66755 pod_ready.go:93] pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.162967   66755 pod_ready.go:82] duration metric: took 4.16468ms for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.162975   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9czbc" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.167309   66755 pod_ready.go:93] pod "kube-proxy-9czbc" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.167327   66755 pod_ready.go:82] duration metric: took 4.347415ms for pod "kube-proxy-9czbc" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.167334   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.171048   66755 pod_ready.go:93] pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.171065   66755 pod_ready.go:82] duration metric: took 3.724785ms for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.171071   66755 pod_ready.go:39] duration metric: took 7.545445402s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:51.171083   66755 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:28:51.171126   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:51.186751   66755 api_server.go:72] duration metric: took 7.831380288s to wait for apiserver process to appear ...
	I1004 04:28:51.186782   66755 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:28:51.186799   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:28:51.192753   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 200:
	ok
	I1004 04:28:51.194259   66755 api_server.go:141] control plane version: v1.31.1
	I1004 04:28:51.194284   66755 api_server.go:131] duration metric: took 7.491456ms to wait for apiserver health ...
	I1004 04:28:51.194292   66755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:28:51.241469   66755 system_pods.go:59] 9 kube-system pods found
	I1004 04:28:51.241491   66755 system_pods.go:61] "coredns-7c65d6cfc9-h5tbr" [87deb61f-2ce4-4d45-91da-c16557b5ef75] Running
	I1004 04:28:51.241496   66755 system_pods.go:61] "coredns-7c65d6cfc9-p52s6" [b9b3cd7f-f28e-4502-a55d-7792cfa5a6fe] Running
	I1004 04:28:51.241500   66755 system_pods.go:61] "etcd-embed-certs-934812" [487917e4-1b38-4b84-baf9-eacc1113718e] Running
	I1004 04:28:51.241503   66755 system_pods.go:61] "kube-apiserver-embed-certs-934812" [7fcfc483-3c53-415d-8329-5cae1ecd022f] Running
	I1004 04:28:51.241507   66755 system_pods.go:61] "kube-controller-manager-embed-certs-934812" [9ab25b16-916b-49f7-b34d-a12cf8552769] Running
	I1004 04:28:51.241514   66755 system_pods.go:61] "kube-proxy-9czbc" [dedff5a2-62b6-49c3-8369-9182d1c5bf7a] Running
	I1004 04:28:51.241517   66755 system_pods.go:61] "kube-scheduler-embed-certs-934812" [88a5acd5-108f-482a-ac24-7b75a30428ff] Running
	I1004 04:28:51.241525   66755 system_pods.go:61] "metrics-server-6867b74b74-fh2lk" [12e3e884-2ad3-4eaa-a505-822717e5bc8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:51.241528   66755 system_pods.go:61] "storage-provisioner" [67b4ef22-068c-4d14-840e-deab91c5ab94] Running
	I1004 04:28:51.241534   66755 system_pods.go:74] duration metric: took 47.237476ms to wait for pod list to return data ...
	I1004 04:28:51.241541   66755 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:28:51.438932   66755 default_sa.go:45] found service account: "default"
	I1004 04:28:51.438957   66755 default_sa.go:55] duration metric: took 197.410206ms for default service account to be created ...
	I1004 04:28:51.438966   66755 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:28:51.642064   66755 system_pods.go:86] 9 kube-system pods found
	I1004 04:28:51.642091   66755 system_pods.go:89] "coredns-7c65d6cfc9-h5tbr" [87deb61f-2ce4-4d45-91da-c16557b5ef75] Running
	I1004 04:28:51.642095   66755 system_pods.go:89] "coredns-7c65d6cfc9-p52s6" [b9b3cd7f-f28e-4502-a55d-7792cfa5a6fe] Running
	I1004 04:28:51.642100   66755 system_pods.go:89] "etcd-embed-certs-934812" [487917e4-1b38-4b84-baf9-eacc1113718e] Running
	I1004 04:28:51.642103   66755 system_pods.go:89] "kube-apiserver-embed-certs-934812" [7fcfc483-3c53-415d-8329-5cae1ecd022f] Running
	I1004 04:28:51.642107   66755 system_pods.go:89] "kube-controller-manager-embed-certs-934812" [9ab25b16-916b-49f7-b34d-a12cf8552769] Running
	I1004 04:28:51.642111   66755 system_pods.go:89] "kube-proxy-9czbc" [dedff5a2-62b6-49c3-8369-9182d1c5bf7a] Running
	I1004 04:28:51.642115   66755 system_pods.go:89] "kube-scheduler-embed-certs-934812" [88a5acd5-108f-482a-ac24-7b75a30428ff] Running
	I1004 04:28:51.642121   66755 system_pods.go:89] "metrics-server-6867b74b74-fh2lk" [12e3e884-2ad3-4eaa-a505-822717e5bc8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:51.642124   66755 system_pods.go:89] "storage-provisioner" [67b4ef22-068c-4d14-840e-deab91c5ab94] Running
	I1004 04:28:51.642133   66755 system_pods.go:126] duration metric: took 203.1616ms to wait for k8s-apps to be running ...
	I1004 04:28:51.642139   66755 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:28:51.642176   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:51.658916   66755 system_svc.go:56] duration metric: took 16.763146ms WaitForService to wait for kubelet
	I1004 04:28:51.658948   66755 kubeadm.go:582] duration metric: took 8.303579518s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:28:51.658964   66755 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:28:51.839048   66755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:28:51.839067   66755 node_conditions.go:123] node cpu capacity is 2
	I1004 04:28:51.839076   66755 node_conditions.go:105] duration metric: took 180.108785ms to run NodePressure ...
	I1004 04:28:51.839086   66755 start.go:241] waiting for startup goroutines ...
	I1004 04:28:51.839093   66755 start.go:246] waiting for cluster config update ...
	I1004 04:28:51.839103   66755 start.go:255] writing updated cluster config ...
	I1004 04:28:51.839343   66755 ssh_runner.go:195] Run: rm -f paused
	I1004 04:28:51.887283   66755 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:28:51.889326   66755 out.go:177] * Done! kubectl is now configured to use "embed-certs-934812" cluster and "default" namespace by default
	I1004 04:28:48.765066   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:48.780955   67541 api_server.go:72] duration metric: took 4m18.802753607s to wait for apiserver process to appear ...
	I1004 04:28:48.780988   67541 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:28:48.781022   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:48.781074   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:48.817315   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:48.817337   67541 cri.go:89] found id: ""
	I1004 04:28:48.817346   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:48.817406   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.821619   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:48.821676   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:48.860019   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:48.860043   67541 cri.go:89] found id: ""
	I1004 04:28:48.860052   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:48.860101   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.864005   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:48.864065   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:48.901273   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:48.901295   67541 cri.go:89] found id: ""
	I1004 04:28:48.901303   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:48.901353   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.905950   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:48.906007   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:48.939708   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:48.939735   67541 cri.go:89] found id: ""
	I1004 04:28:48.939745   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:48.939812   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.943625   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:48.943692   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:48.979452   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:48.979481   67541 cri.go:89] found id: ""
	I1004 04:28:48.979490   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:48.979550   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.983629   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:48.983692   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:49.021137   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:49.021160   67541 cri.go:89] found id: ""
	I1004 04:28:49.021169   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:49.021242   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.025644   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:49.025712   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:49.062410   67541 cri.go:89] found id: ""
	I1004 04:28:49.062437   67541 logs.go:282] 0 containers: []
	W1004 04:28:49.062447   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:49.062452   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:49.062499   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:49.098959   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:49.098990   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:49.098996   67541 cri.go:89] found id: ""
	I1004 04:28:49.099005   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:49.099067   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.103474   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.107824   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:49.107852   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:49.228249   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:49.228278   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:49.269454   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:49.269479   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:49.305639   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:49.305666   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:49.770318   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:49.770348   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:49.808468   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:49.808493   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:49.884965   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:49.884997   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:49.901874   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:49.901898   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:49.952844   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:49.952869   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:49.986100   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:49.986141   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:50.023082   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:50.023108   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:50.074848   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:50.074876   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:50.112513   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:50.112541   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:52.658644   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:28:52.663076   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 200:
	ok
	I1004 04:28:52.663997   67541 api_server.go:141] control plane version: v1.31.1
	I1004 04:28:52.664017   67541 api_server.go:131] duration metric: took 3.8830221s to wait for apiserver health ...
	I1004 04:28:52.664024   67541 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:28:52.664045   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:52.664085   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:52.704174   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:52.704193   67541 cri.go:89] found id: ""
	I1004 04:28:52.704200   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:52.704253   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.708388   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:52.708438   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:52.743028   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:52.743053   67541 cri.go:89] found id: ""
	I1004 04:28:52.743062   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:52.743108   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.747354   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:52.747405   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:52.782350   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:52.782373   67541 cri.go:89] found id: ""
	I1004 04:28:52.782382   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:52.782424   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.786336   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:52.786394   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:52.826929   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:52.826950   67541 cri.go:89] found id: ""
	I1004 04:28:52.826958   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:52.827018   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.831039   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:52.831094   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:52.865963   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:52.865984   67541 cri.go:89] found id: ""
	I1004 04:28:52.865992   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:52.866032   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.869982   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:52.870024   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:52.919060   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:52.919081   67541 cri.go:89] found id: ""
	I1004 04:28:52.919091   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:52.919139   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.923080   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:52.923131   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:52.962615   67541 cri.go:89] found id: ""
	I1004 04:28:52.962636   67541 logs.go:282] 0 containers: []
	W1004 04:28:52.962643   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:52.962649   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:52.962706   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:52.999914   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:52.999936   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:52.999940   67541 cri.go:89] found id: ""
	I1004 04:28:52.999947   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:52.999998   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:53.003894   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:53.007759   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:53.007776   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:53.021269   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:53.021289   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:53.088683   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:53.088711   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:53.127363   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:53.127387   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:53.163467   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:53.163490   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:53.212683   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:53.212717   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:49.123892   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:51.124121   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:53.124323   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:49.686881   67282 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:28:49.687234   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:28:49.687487   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:28:53.569320   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:53.569360   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:53.644197   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:53.644231   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:53.747465   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:53.747497   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:53.788761   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:53.788798   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:53.822705   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:53.822737   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:53.857525   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:53.857548   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:53.894880   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:53.894904   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:56.455254   67541 system_pods.go:59] 8 kube-system pods found
	I1004 04:28:56.455286   67541 system_pods.go:61] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running
	I1004 04:28:56.455293   67541 system_pods.go:61] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running
	I1004 04:28:56.455299   67541 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running
	I1004 04:28:56.455304   67541 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running
	I1004 04:28:56.455309   67541 system_pods.go:61] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:28:56.455314   67541 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running
	I1004 04:28:56.455322   67541 system_pods.go:61] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:56.455329   67541 system_pods.go:61] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:28:56.455338   67541 system_pods.go:74] duration metric: took 3.791308758s to wait for pod list to return data ...
	I1004 04:28:56.455347   67541 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:28:56.457799   67541 default_sa.go:45] found service account: "default"
	I1004 04:28:56.457817   67541 default_sa.go:55] duration metric: took 2.463452ms for default service account to be created ...
	I1004 04:28:56.457825   67541 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:28:56.462569   67541 system_pods.go:86] 8 kube-system pods found
	I1004 04:28:56.462593   67541 system_pods.go:89] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running
	I1004 04:28:56.462601   67541 system_pods.go:89] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running
	I1004 04:28:56.462608   67541 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running
	I1004 04:28:56.462615   67541 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running
	I1004 04:28:56.462620   67541 system_pods.go:89] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:28:56.462626   67541 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running
	I1004 04:28:56.462632   67541 system_pods.go:89] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:56.462637   67541 system_pods.go:89] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:28:56.462645   67541 system_pods.go:126] duration metric: took 4.814032ms to wait for k8s-apps to be running ...
	I1004 04:28:56.462657   67541 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:28:56.462749   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:56.478944   67541 system_svc.go:56] duration metric: took 16.282384ms WaitForService to wait for kubelet
	I1004 04:28:56.478966   67541 kubeadm.go:582] duration metric: took 4m26.500769346s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:28:56.478982   67541 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:28:56.481946   67541 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:28:56.481968   67541 node_conditions.go:123] node cpu capacity is 2
	I1004 04:28:56.481980   67541 node_conditions.go:105] duration metric: took 2.992423ms to run NodePressure ...
	I1004 04:28:56.481993   67541 start.go:241] waiting for startup goroutines ...
	I1004 04:28:56.482006   67541 start.go:246] waiting for cluster config update ...
	I1004 04:28:56.482018   67541 start.go:255] writing updated cluster config ...
	I1004 04:28:56.482450   67541 ssh_runner.go:195] Run: rm -f paused
	I1004 04:28:56.528299   67541 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:28:56.530289   67541 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-281471" cluster and "default" namespace by default
	I1004 04:28:55.625569   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:58.122544   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:54.687773   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:28:54.688026   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:29:00.124374   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:02.624622   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:05.123726   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:07.622036   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:04.688599   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:29:04.688808   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:29:09.623060   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:11.623590   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:12.123919   66293 pod_ready.go:82] duration metric: took 4m0.007496621s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	E1004 04:29:12.123939   66293 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 04:29:12.123946   66293 pod_ready.go:39] duration metric: took 4m3.607239118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:29:12.123960   66293 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:29:12.123985   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:12.124023   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:12.174748   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:12.174767   66293 cri.go:89] found id: ""
	I1004 04:29:12.174775   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:12.174823   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.179374   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:12.179436   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:12.219617   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:12.219637   66293 cri.go:89] found id: ""
	I1004 04:29:12.219646   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:12.219699   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.223774   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:12.223844   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:12.261339   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:12.261360   66293 cri.go:89] found id: ""
	I1004 04:29:12.261369   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:12.261424   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.265364   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:12.265414   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:12.313178   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:12.313197   66293 cri.go:89] found id: ""
	I1004 04:29:12.313206   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:12.313271   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.317440   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:12.317498   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:12.353037   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:12.353054   66293 cri.go:89] found id: ""
	I1004 04:29:12.353072   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:12.353125   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.357212   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:12.357272   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:12.392082   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:12.392106   66293 cri.go:89] found id: ""
	I1004 04:29:12.392115   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:12.392167   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.396333   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:12.396395   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:12.439298   66293 cri.go:89] found id: ""
	I1004 04:29:12.439329   66293 logs.go:282] 0 containers: []
	W1004 04:29:12.439337   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:12.439343   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:12.439387   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:12.478798   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:12.478814   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:12.478818   66293 cri.go:89] found id: ""
	I1004 04:29:12.478824   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:12.478866   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.483035   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.486977   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:12.486992   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:12.520849   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:12.520875   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:13.072628   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:13.072671   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:13.137973   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:13.138000   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:13.259585   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:13.259611   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:13.312315   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:13.312340   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:13.352351   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:13.352377   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:13.391319   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:13.391352   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:13.430681   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:13.430712   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:13.464929   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:13.464957   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:13.505312   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:13.505340   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:13.520476   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:13.520517   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:13.582723   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:13.582752   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:16.131437   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:29:16.150426   66293 api_server.go:72] duration metric: took 4m14.921074088s to wait for apiserver process to appear ...
	I1004 04:29:16.150457   66293 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:29:16.150498   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:16.150559   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:16.197236   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:16.197265   66293 cri.go:89] found id: ""
	I1004 04:29:16.197275   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:16.197341   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.202103   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:16.202187   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:16.236881   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:16.236907   66293 cri.go:89] found id: ""
	I1004 04:29:16.236916   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:16.236976   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.241220   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:16.241289   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:16.275727   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:16.275750   66293 cri.go:89] found id: ""
	I1004 04:29:16.275759   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:16.275828   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.280282   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:16.280352   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:16.320297   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:16.320323   66293 cri.go:89] found id: ""
	I1004 04:29:16.320332   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:16.320386   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.324982   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:16.325038   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:16.367062   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:16.367081   66293 cri.go:89] found id: ""
	I1004 04:29:16.367089   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:16.367143   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.371124   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:16.371182   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:16.405706   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:16.405728   66293 cri.go:89] found id: ""
	I1004 04:29:16.405738   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:16.405785   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.410027   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:16.410084   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:16.444937   66293 cri.go:89] found id: ""
	I1004 04:29:16.444961   66293 logs.go:282] 0 containers: []
	W1004 04:29:16.444971   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:16.444978   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:16.445032   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:16.480123   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:16.480153   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:16.480160   66293 cri.go:89] found id: ""
	I1004 04:29:16.480168   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:16.480228   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.484216   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.488156   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:16.488177   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:16.501573   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:16.501591   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:16.600789   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:16.600814   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:16.641604   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:16.641634   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:16.696735   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:16.696764   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:16.737153   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:16.737177   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:17.188490   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:17.188546   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:17.262072   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:17.262108   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:17.310881   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:17.310911   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:17.356105   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:17.356135   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:17.398916   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:17.398948   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:17.440122   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:17.440149   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:17.482529   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:17.482553   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:20.034163   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:29:20.039165   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 200:
	ok
	I1004 04:29:20.040105   66293 api_server.go:141] control plane version: v1.31.1
	I1004 04:29:20.040124   66293 api_server.go:131] duration metric: took 3.889660333s to wait for apiserver health ...
	I1004 04:29:20.040131   66293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:29:20.040156   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:20.040203   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:20.078208   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:20.078234   66293 cri.go:89] found id: ""
	I1004 04:29:20.078244   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:20.078306   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.082751   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:20.082808   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:20.128002   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:20.128024   66293 cri.go:89] found id: ""
	I1004 04:29:20.128034   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:20.128084   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.132039   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:20.132097   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:20.171887   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:20.171911   66293 cri.go:89] found id: ""
	I1004 04:29:20.171921   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:20.171978   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.176095   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:20.176150   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:20.215155   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:20.215175   66293 cri.go:89] found id: ""
	I1004 04:29:20.215183   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:20.215241   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.219738   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:20.219814   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:20.256116   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:20.256134   66293 cri.go:89] found id: ""
	I1004 04:29:20.256142   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:20.256194   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.261201   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:20.261281   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:20.302328   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:20.302350   66293 cri.go:89] found id: ""
	I1004 04:29:20.302359   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:20.302414   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.306488   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:20.306551   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:20.341266   66293 cri.go:89] found id: ""
	I1004 04:29:20.341290   66293 logs.go:282] 0 containers: []
	W1004 04:29:20.341300   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:20.341307   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:20.341361   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:20.379560   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:20.379584   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:20.379589   66293 cri.go:89] found id: ""
	I1004 04:29:20.379598   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:20.379653   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.383816   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.388118   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:20.388137   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:20.487661   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:20.487686   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:20.539728   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:20.539754   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:20.577435   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:20.577463   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:20.616450   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:20.616480   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:20.658292   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:20.658316   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:20.733483   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:20.733515   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:20.749004   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:20.749033   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:20.799355   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:20.799383   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:20.839676   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:20.839699   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:20.874870   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:20.874896   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:20.912635   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:20.912658   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:20.968377   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:20.968405   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:23.820462   66293 system_pods.go:59] 8 kube-system pods found
	I1004 04:29:23.820491   66293 system_pods.go:61] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running
	I1004 04:29:23.820497   66293 system_pods.go:61] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running
	I1004 04:29:23.820501   66293 system_pods.go:61] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running
	I1004 04:29:23.820506   66293 system_pods.go:61] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running
	I1004 04:29:23.820514   66293 system_pods.go:61] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running
	I1004 04:29:23.820517   66293 system_pods.go:61] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running
	I1004 04:29:23.820524   66293 system_pods.go:61] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:29:23.820529   66293 system_pods.go:61] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running
	I1004 04:29:23.820537   66293 system_pods.go:74] duration metric: took 3.780400092s to wait for pod list to return data ...
	I1004 04:29:23.820544   66293 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:29:23.823119   66293 default_sa.go:45] found service account: "default"
	I1004 04:29:23.823137   66293 default_sa.go:55] duration metric: took 2.58707ms for default service account to be created ...
	I1004 04:29:23.823144   66293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:29:23.827365   66293 system_pods.go:86] 8 kube-system pods found
	I1004 04:29:23.827385   66293 system_pods.go:89] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running
	I1004 04:29:23.827389   66293 system_pods.go:89] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running
	I1004 04:29:23.827393   66293 system_pods.go:89] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running
	I1004 04:29:23.827397   66293 system_pods.go:89] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running
	I1004 04:29:23.827400   66293 system_pods.go:89] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running
	I1004 04:29:23.827405   66293 system_pods.go:89] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running
	I1004 04:29:23.827410   66293 system_pods.go:89] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:29:23.827415   66293 system_pods.go:89] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running
	I1004 04:29:23.827422   66293 system_pods.go:126] duration metric: took 4.27475ms to wait for k8s-apps to be running ...
	I1004 04:29:23.827428   66293 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:29:23.827468   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:29:23.844696   66293 system_svc.go:56] duration metric: took 17.261418ms WaitForService to wait for kubelet
	I1004 04:29:23.844724   66293 kubeadm.go:582] duration metric: took 4m22.61537826s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:29:23.844746   66293 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:29:23.847873   66293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:29:23.847892   66293 node_conditions.go:123] node cpu capacity is 2
	I1004 04:29:23.847902   66293 node_conditions.go:105] duration metric: took 3.149916ms to run NodePressure ...
	I1004 04:29:23.847915   66293 start.go:241] waiting for startup goroutines ...
	I1004 04:29:23.847923   66293 start.go:246] waiting for cluster config update ...
	I1004 04:29:23.847932   66293 start.go:255] writing updated cluster config ...
	I1004 04:29:23.848202   66293 ssh_runner.go:195] Run: rm -f paused
	I1004 04:29:23.894092   66293 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:29:23.895736   66293 out.go:177] * Done! kubectl is now configured to use "no-preload-658545" cluster and "default" namespace by default
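Before declaring the no-preload cluster ready, the log above waits on four things: every kube-system pod listed, the default service account present, the kubelet service active, and node pressure conditions checked. A rough standalone sketch of that kind of readiness check is below; it is not minikube's code, it shells out to kubectl and systemctl rather than using the Kubernetes API, and it is stricter than minikube's own check (which tolerated the Pending metrics-server pod seen above):

    // waitready.go - a minimal sketch of the readiness checks recorded above.
    // Assumes kubectl and systemctl are available on the node being checked.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // kubeSystemPodsRunning reports whether every kube-system pod shows STATUS Running.
    func kubeSystemPodsRunning() (bool, error) {
        out, err := exec.Command("kubectl", "get", "pods", "-n", "kube-system", "--no-headers").Output()
        if err != nil {
            return false, err
        }
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            fields := strings.Fields(line)
            if len(fields) < 3 || fields[2] != "Running" {
                return false, nil // e.g. a Pending metrics-server pod, as in the log above
            }
        }
        return true, nil
    }

    // kubeletActive mirrors "sudo systemctl is-active --quiet service kubelet".
    func kubeletActive() bool {
        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }

    func main() {
        ready, err := kubeSystemPodsRunning()
        fmt.Println("kube-system pods running:", ready, "err:", err, "kubelet active:", kubeletActive())
    }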
	I1004 04:29:24.690241   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:29:24.690419   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:04.692816   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:04.693091   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:04.693114   67282 kubeadm.go:310] 
	I1004 04:30:04.693149   67282 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:30:04.693214   67282 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:30:04.693236   67282 kubeadm.go:310] 
	I1004 04:30:04.693295   67282 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:30:04.693327   67282 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:30:04.693451   67282 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:30:04.693460   67282 kubeadm.go:310] 
	I1004 04:30:04.693568   67282 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:30:04.693614   67282 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:30:04.693668   67282 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:30:04.693688   67282 kubeadm.go:310] 
	I1004 04:30:04.693843   67282 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:30:04.693966   67282 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:30:04.693982   67282 kubeadm.go:310] 
	I1004 04:30:04.694097   67282 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:30:04.694218   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:30:04.694305   67282 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:30:04.694387   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:30:04.694399   67282 kubeadm.go:310] 
	I1004 04:30:04.695379   67282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:30:04.695478   67282 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:30:04.695566   67282 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1004 04:30:04.695695   67282 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
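The repeated [kubelet-check] lines in the failure above are kubeadm probing the kubelet's local health endpoint, the same check as the quoted `curl -sSL http://localhost:10248/healthz`. For reference, an equivalent standalone probe (a sketch, not kubeadm's code) is:

    // kubeletprobe.go - a minimal sketch of the kubelet health probe described by
    // the [kubelet-check] messages above: GET http://localhost:10248/healthz.
    // While the kubelet is down, this fails with "connection refused", exactly as logged.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            fmt.Println("kubelet not healthy:", err) // e.g. dial tcp 127.0.0.1:10248: connect: connection refused
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
    }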
	
	I1004 04:30:04.695742   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:30:05.153635   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:30:05.170057   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:30:05.179541   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:30:05.179563   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:30:05.179611   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:30:05.188969   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:30:05.189025   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:30:05.198049   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:30:05.207031   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:30:05.207118   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:30:05.216934   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:30:05.226477   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:30:05.226541   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:30:05.236222   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:30:05.245314   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:30:05.245374   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
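The four grep-then-remove steps above implement a stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is deleted so the retried `kubeadm init` can rewrite it. A minimal sketch of that logic, assuming direct file access instead of the sudo grep/rm pairs in the log (and not minikube's actual implementation), is:

    // staleconf.go - a minimal sketch of the stale kubeconfig cleanup recorded above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Mirrors "sudo rm -f <conf>" from the log; a missing file is fine.
                if rmErr := os.Remove(conf); rmErr != nil && !os.IsNotExist(rmErr) {
                    fmt.Println("could not remove", conf, ":", rmErr)
                }
                continue
            }
            fmt.Println("keeping", conf)
        }
    }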
	I1004 04:30:05.255762   67282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:30:05.329816   67282 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:30:05.329953   67282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:30:05.482342   67282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:30:05.482549   67282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:30:05.482692   67282 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 04:30:05.666400   67282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:30:05.668115   67282 out.go:235]   - Generating certificates and keys ...
	I1004 04:30:05.668217   67282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:30:05.668319   67282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:30:05.668460   67282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:30:05.668562   67282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:30:05.668660   67282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:30:05.668734   67282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:30:05.668823   67282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:30:05.668905   67282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:30:05.669010   67282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:30:05.669130   67282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:30:05.669186   67282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:30:05.669269   67282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:30:05.773446   67282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:30:05.823736   67282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:30:05.951294   67282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:30:06.250340   67282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:30:06.275797   67282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:30:06.276877   67282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:30:06.276944   67282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:30:06.437286   67282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:30:06.438849   67282 out.go:235]   - Booting up control plane ...
	I1004 04:30:06.438952   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:30:06.443688   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:30:06.444596   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:30:06.445267   67282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:30:06.457334   67282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 04:30:46.456706   67282 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:30:46.456854   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:46.457117   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:51.456986   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:51.457240   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:31:01.457062   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:31:01.457288   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:31:21.456976   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:31:21.457277   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:32:01.456978   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:32:01.457225   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:32:01.457249   67282 kubeadm.go:310] 
	I1004 04:32:01.457312   67282 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:32:01.457374   67282 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:32:01.457383   67282 kubeadm.go:310] 
	I1004 04:32:01.457434   67282 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:32:01.457512   67282 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:32:01.457678   67282 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:32:01.457692   67282 kubeadm.go:310] 
	I1004 04:32:01.457838   67282 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:32:01.457892   67282 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:32:01.457946   67282 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:32:01.457957   67282 kubeadm.go:310] 
	I1004 04:32:01.458102   67282 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:32:01.458217   67282 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:32:01.458233   67282 kubeadm.go:310] 
	I1004 04:32:01.458379   67282 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:32:01.458494   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:32:01.458604   67282 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:32:01.458699   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:32:01.458710   67282 kubeadm.go:310] 
	I1004 04:32:01.459157   67282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:32:01.459272   67282 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:32:01.459386   67282 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1004 04:32:01.459464   67282 kubeadm.go:394] duration metric: took 7m57.553695137s to StartCluster
	I1004 04:32:01.459522   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:32:01.459586   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:32:01.500997   67282 cri.go:89] found id: ""
	I1004 04:32:01.501026   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.501037   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:32:01.501044   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:32:01.501102   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:32:01.537240   67282 cri.go:89] found id: ""
	I1004 04:32:01.537276   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.537288   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:32:01.537295   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:32:01.537349   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:32:01.573959   67282 cri.go:89] found id: ""
	I1004 04:32:01.573995   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.574007   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:32:01.574013   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:32:01.574074   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:32:01.610614   67282 cri.go:89] found id: ""
	I1004 04:32:01.610645   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.610657   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:32:01.610665   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:32:01.610716   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:32:01.645520   67282 cri.go:89] found id: ""
	I1004 04:32:01.645554   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.645567   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:32:01.645574   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:32:01.645640   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:32:01.679787   67282 cri.go:89] found id: ""
	I1004 04:32:01.679814   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.679823   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:32:01.679828   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:32:01.679873   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:32:01.714860   67282 cri.go:89] found id: ""
	I1004 04:32:01.714883   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.714891   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:32:01.714897   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:32:01.714952   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:32:01.761170   67282 cri.go:89] found id: ""
	I1004 04:32:01.761198   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.761208   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
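The loop above is the post-mortem container scan: for each expected component, `crictl ps -a --quiet --name=<component>` is run, and an empty result produces the "No container was found matching ..." warnings. A standalone sketch of the same scan (assuming crictl is installed; not minikube's code) is:

    // crilist.go - a minimal sketch of the post-mortem container scan recorded above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        for _, name := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        } {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
        }
    }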
	I1004 04:32:01.761220   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:32:01.761232   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:32:01.822966   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:32:01.823006   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:32:01.839482   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:32:01.839510   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:32:01.917863   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:32:01.917887   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:32:01.917901   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:32:02.027216   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:32:02.027247   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1004 04:32:02.069804   67282 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1004 04:32:02.069852   67282 out.go:270] * 
	W1004 04:32:02.069922   67282 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:32:02.069939   67282 out.go:270] * 
	W1004 04:32:02.070740   67282 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 04:32:02.074308   67282 out.go:201] 
	W1004 04:32:02.075387   67282 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:32:02.075427   67282 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1004 04:32:02.075458   67282 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1004 04:32:02.076675   67282 out.go:201] 
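The suggestion above is to inspect the kubelet journal before retrying with --extra-config=kubelet.cgroup-driver=systemd. A small sketch of that manual triage step follows; it reuses the `journalctl -u kubelet -n 400` command already seen in this log, and the "cgroup" filter string is an assumption for illustration, not something taken from this report:

    // kubeletcgroup.go - a minimal sketch of the triage suggested above: pull recent
    // kubelet journal entries and surface any cgroup-related lines for inspection.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").CombinedOutput()
        if err != nil {
            fmt.Println("journalctl failed:", err)
            return
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.Contains(strings.ToLower(line), "cgroup") {
                fmt.Println(line)
            }
        }
    }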
	
	
	==> CRI-O <==
	Oct 04 04:37:53 embed-certs-934812 crio[706]: time="2024-10-04 04:37:53.967997270Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7d4ffd8-4d8c-49e8-bf66-79641785e807 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:37:53 embed-certs-934812 crio[706]: time="2024-10-04 04:37:53.969148730Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f051963-1f96-43b3-90d3-383043a0ac07 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:37:53 embed-certs-934812 crio[706]: time="2024-10-04 04:37:53.969712889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016673969688142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f051963-1f96-43b3-90d3-383043a0ac07 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:37:53 embed-certs-934812 crio[706]: time="2024-10-04 04:37:53.970504479Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7719a6e2-0c2b-4ef8-b9e3-1cfe718ee372 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:53 embed-certs-934812 crio[706]: time="2024-10-04 04:37:53.970776999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7719a6e2-0c2b-4ef8-b9e3-1cfe718ee372 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:53 embed-certs-934812 crio[706]: time="2024-10-04 04:37:53.971264860Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee2305c441f29525be535936a0d26d917c6641ba676c0cd946dd389e33592d0e,PodSandboxId:ce87104926ac6615ffe2a06ef2da4e5300660538068e1a9b4b4e2da9c6007bab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728016125307438513,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b4ef22-068c-4d14-840e-deab91c5ab94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbdcd3a324f421f3f0dc2aff76b89a9da3e57f5627e61474562ff8414be5510,PodSandboxId:b40d8c44f59da73246613d9666638c42077bc69f9b86d11b17059cc8b2acc9b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728016124144804273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5tbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87deb61f-2ce4-4d45-91da-c16557b5ef75,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188076ac7a7af46670dbbdee6a82210ef511cdb240986543a3a8c664b4b0cb3f,PodSandboxId:4c42d4b4b1430d0c5826a18a6325df53f7738d59c8e79b4fb9ef91294730f56a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728016124135602860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p52s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
9b3cd7f-f28e-4502-a55d-7792cfa5a6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4cec58f8215a7fb666645448240662fbda789f0558c0926b20f9f0958517b9,PodSandboxId:c8ba948590195942a8e3f251cdcfe6f39a0d657c4d719591b6df20174cddc5bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728016123867478706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9czbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedff5a2-62b6-49c3-8369-9182d1c5bf7a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f368c0bb224d66a098cacaa67038f90f3d29fb0beb9bef1fee3cc4bba10ab34,PodSandboxId:04daab29e1a1fcbb05c9905daafdb7e8a4fe30f5c7714977d33aebd4a2afed05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728016112715730070,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395dacd00dc811c334e4fda7898664c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bca5274feb5a67a3e722abb2ee79d56e93ff3ff1cf0350f2dc0eadf6af6764,PodSandboxId:2a81540c23b0366c2d0063654b570487e3d57ef145c5363cd2ea803ab87301b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728016112672617127,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c206ade11d659fd6eef7ef29aa408cde,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b379c78d8a9fc71afe4dbc57caf6f54b81d515dce69290f74a4aaf7a111fab0,PodSandboxId:fb51e9c9fda9bc66053c3f42630875ded0784fabd965b6cd4f255ecd2e6f59db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728016112660334881,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e126f795bcf640ac6233faca19ff5b5e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be902a556db8d8f7a0a1a76bc3064baa5240668a920e4640fefb2198eaafa08b,PodSandboxId:94aae5132834a78902d1424736bb9dd45722628c7813d3da821ab29d14247c97,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728016112627866859,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73de2741451a14dbc69d5790883240975641ebae82f1af6f9acbd6d71c52000e,PodSandboxId:25174628a7c5d2aaeae687c122ee504f625f3fe09c9af3c9057bb1728b7ec3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728015823040740414,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7719a6e2-0c2b-4ef8-b9e3-1cfe718ee372 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:53 embed-certs-934812 crio[706]: time="2024-10-04 04:37:53.975087981Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5b019859-621d-44c3-9255-3bf1ab7d0cd7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 04 04:37:53 embed-certs-934812 crio[706]: time="2024-10-04 04:37:53.975407010Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a83b07a5700702d721fe1139285f589729f80dd48d8d610d2ef11a53b68620b7,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-fh2lk,Uid:12e3e884-2ad3-4eaa-a505-822717e5bc8c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728016125294539076,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-fh2lk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e3e884-2ad3-4eaa-a505-822717e5bc8c,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T04:28:44.976944994Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce87104926ac6615ffe2a06ef2da4e5300660538068e1a9b4b4e2da9c6007bab,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:67b4ef22-068c-4d14-840e-deab91c5ab94,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728016125157160854,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b4ef22-068c-4d14-840e-deab91c5ab94,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-04T04:28:44.846174759Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c8ba948590195942a8e3f251cdcfe6f39a0d657c4d719591b6df20174cddc5bf,Metadata:&PodSandboxMetadata{Name:kube-proxy-9czbc,Uid:dedff5a2-62b6-49c3-8369-9182d1c5bf7a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728016123617981643,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9czbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedff5a2-62b6-49c3-8369-9182d1c5bf7a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T04:28:42.689568404Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4c42d4b4b1430d0c5826a18a6325df53f7738d59c8e79b4fb9ef91294730f56a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-p52s6,Ui
d:b9b3cd7f-f28e-4502-a55d-7792cfa5a6fe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728016123423376683,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-p52s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9b3cd7f-f28e-4502-a55d-7792cfa5a6fe,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T04:28:43.109094401Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b40d8c44f59da73246613d9666638c42077bc69f9b86d11b17059cc8b2acc9b5,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-h5tbr,Uid:87deb61f-2ce4-4d45-91da-c16557b5ef75,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728016123393131850,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5tbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87deb61f-2ce4-4d45-91da-c16557b5ef75,k8s-app: kube-dns,pod-templa
te-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T04:28:43.070728264Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:94aae5132834a78902d1424736bb9dd45722628c7813d3da821ab29d14247c97,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-934812,Uid:7eb498a93de1326f90b260031f2ed41b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728016112454044699,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.74:8443,kubernetes.io/config.hash: 7eb498a93de1326f90b260031f2ed41b,kubernetes.io/config.seen: 2024-10-04T04:28:31.972444901Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fb51e9c9fda9bc66053c3f4263087
5ded0784fabd965b6cd4f255ecd2e6f59db,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-934812,Uid:e126f795bcf640ac6233faca19ff5b5e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728016112447030792,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e126f795bcf640ac6233faca19ff5b5e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e126f795bcf640ac6233faca19ff5b5e,kubernetes.io/config.seen: 2024-10-04T04:28:31.972449234Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2a81540c23b0366c2d0063654b570487e3d57ef145c5363cd2ea803ab87301b4,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-934812,Uid:c206ade11d659fd6eef7ef29aa408cde,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728016112432175413,Labels:map[string]string{component: kub
e-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c206ade11d659fd6eef7ef29aa408cde,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c206ade11d659fd6eef7ef29aa408cde,kubernetes.io/config.seen: 2024-10-04T04:28:31.972450377Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:04daab29e1a1fcbb05c9905daafdb7e8a4fe30f5c7714977d33aebd4a2afed05,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-934812,Uid:b395dacd00dc811c334e4fda7898664c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728016112430631947,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395dacd00dc811c334e4fda7898664c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61
.74:2379,kubernetes.io/config.hash: b395dacd00dc811c334e4fda7898664c,kubernetes.io/config.seen: 2024-10-04T04:28:31.972451518Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:25174628a7c5d2aaeae687c122ee504f625f3fe09c9af3c9057bb1728b7ec3af,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-934812,Uid:7eb498a93de1326f90b260031f2ed41b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1728015822693554211,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.74:8443,kubernetes.io/config.hash: 7eb498a93de1326f90b260031f2ed41b,kubernetes.io/config.seen: 2024-10-04T04:23:42.199861960Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collect
or/interceptors.go:74" id=5b019859-621d-44c3-9255-3bf1ab7d0cd7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 04 04:37:53 embed-certs-934812 crio[706]: time="2024-10-04 04:37:53.976581640Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96331ecb-3b1f-41b7-a3be-230c5a67098f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:53 embed-certs-934812 crio[706]: time="2024-10-04 04:37:53.976679777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96331ecb-3b1f-41b7-a3be-230c5a67098f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:53 embed-certs-934812 crio[706]: time="2024-10-04 04:37:53.976871747Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee2305c441f29525be535936a0d26d917c6641ba676c0cd946dd389e33592d0e,PodSandboxId:ce87104926ac6615ffe2a06ef2da4e5300660538068e1a9b4b4e2da9c6007bab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728016125307438513,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b4ef22-068c-4d14-840e-deab91c5ab94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbdcd3a324f421f3f0dc2aff76b89a9da3e57f5627e61474562ff8414be5510,PodSandboxId:b40d8c44f59da73246613d9666638c42077bc69f9b86d11b17059cc8b2acc9b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728016124144804273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5tbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87deb61f-2ce4-4d45-91da-c16557b5ef75,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188076ac7a7af46670dbbdee6a82210ef511cdb240986543a3a8c664b4b0cb3f,PodSandboxId:4c42d4b4b1430d0c5826a18a6325df53f7738d59c8e79b4fb9ef91294730f56a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728016124135602860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p52s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
9b3cd7f-f28e-4502-a55d-7792cfa5a6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4cec58f8215a7fb666645448240662fbda789f0558c0926b20f9f0958517b9,PodSandboxId:c8ba948590195942a8e3f251cdcfe6f39a0d657c4d719591b6df20174cddc5bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728016123867478706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9czbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedff5a2-62b6-49c3-8369-9182d1c5bf7a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f368c0bb224d66a098cacaa67038f90f3d29fb0beb9bef1fee3cc4bba10ab34,PodSandboxId:04daab29e1a1fcbb05c9905daafdb7e8a4fe30f5c7714977d33aebd4a2afed05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728016112715730070,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395dacd00dc811c334e4fda7898664c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bca5274feb5a67a3e722abb2ee79d56e93ff3ff1cf0350f2dc0eadf6af6764,PodSandboxId:2a81540c23b0366c2d0063654b570487e3d57ef145c5363cd2ea803ab87301b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728016112672617127,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c206ade11d659fd6eef7ef29aa408cde,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b379c78d8a9fc71afe4dbc57caf6f54b81d515dce69290f74a4aaf7a111fab0,PodSandboxId:fb51e9c9fda9bc66053c3f42630875ded0784fabd965b6cd4f255ecd2e6f59db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728016112660334881,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e126f795bcf640ac6233faca19ff5b5e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be902a556db8d8f7a0a1a76bc3064baa5240668a920e4640fefb2198eaafa08b,PodSandboxId:94aae5132834a78902d1424736bb9dd45722628c7813d3da821ab29d14247c97,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728016112627866859,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73de2741451a14dbc69d5790883240975641ebae82f1af6f9acbd6d71c52000e,PodSandboxId:25174628a7c5d2aaeae687c122ee504f625f3fe09c9af3c9057bb1728b7ec3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728015823040740414,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=96331ecb-3b1f-41b7-a3be-230c5a67098f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:54 embed-certs-934812 crio[706]: time="2024-10-04 04:37:54.017894000Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=353396ee-3013-46c5-a5b3-7e267c265a87 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:37:54 embed-certs-934812 crio[706]: time="2024-10-04 04:37:54.018024378Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=353396ee-3013-46c5-a5b3-7e267c265a87 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:37:54 embed-certs-934812 crio[706]: time="2024-10-04 04:37:54.019464915Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=480da31b-b079-420c-bc79-dc27940abae9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:37:54 embed-certs-934812 crio[706]: time="2024-10-04 04:37:54.019961029Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016674019928000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=480da31b-b079-420c-bc79-dc27940abae9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:37:54 embed-certs-934812 crio[706]: time="2024-10-04 04:37:54.020807923Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e97219ee-64df-4d07-9777-3a8d1eca128e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:54 embed-certs-934812 crio[706]: time="2024-10-04 04:37:54.020898706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e97219ee-64df-4d07-9777-3a8d1eca128e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:54 embed-certs-934812 crio[706]: time="2024-10-04 04:37:54.021158629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee2305c441f29525be535936a0d26d917c6641ba676c0cd946dd389e33592d0e,PodSandboxId:ce87104926ac6615ffe2a06ef2da4e5300660538068e1a9b4b4e2da9c6007bab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728016125307438513,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b4ef22-068c-4d14-840e-deab91c5ab94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbdcd3a324f421f3f0dc2aff76b89a9da3e57f5627e61474562ff8414be5510,PodSandboxId:b40d8c44f59da73246613d9666638c42077bc69f9b86d11b17059cc8b2acc9b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728016124144804273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5tbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87deb61f-2ce4-4d45-91da-c16557b5ef75,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188076ac7a7af46670dbbdee6a82210ef511cdb240986543a3a8c664b4b0cb3f,PodSandboxId:4c42d4b4b1430d0c5826a18a6325df53f7738d59c8e79b4fb9ef91294730f56a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728016124135602860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p52s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
9b3cd7f-f28e-4502-a55d-7792cfa5a6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4cec58f8215a7fb666645448240662fbda789f0558c0926b20f9f0958517b9,PodSandboxId:c8ba948590195942a8e3f251cdcfe6f39a0d657c4d719591b6df20174cddc5bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728016123867478706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9czbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedff5a2-62b6-49c3-8369-9182d1c5bf7a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f368c0bb224d66a098cacaa67038f90f3d29fb0beb9bef1fee3cc4bba10ab34,PodSandboxId:04daab29e1a1fcbb05c9905daafdb7e8a4fe30f5c7714977d33aebd4a2afed05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728016112715730070,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395dacd00dc811c334e4fda7898664c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bca5274feb5a67a3e722abb2ee79d56e93ff3ff1cf0350f2dc0eadf6af6764,PodSandboxId:2a81540c23b0366c2d0063654b570487e3d57ef145c5363cd2ea803ab87301b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728016112672617127,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c206ade11d659fd6eef7ef29aa408cde,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b379c78d8a9fc71afe4dbc57caf6f54b81d515dce69290f74a4aaf7a111fab0,PodSandboxId:fb51e9c9fda9bc66053c3f42630875ded0784fabd965b6cd4f255ecd2e6f59db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728016112660334881,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e126f795bcf640ac6233faca19ff5b5e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be902a556db8d8f7a0a1a76bc3064baa5240668a920e4640fefb2198eaafa08b,PodSandboxId:94aae5132834a78902d1424736bb9dd45722628c7813d3da821ab29d14247c97,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728016112627866859,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73de2741451a14dbc69d5790883240975641ebae82f1af6f9acbd6d71c52000e,PodSandboxId:25174628a7c5d2aaeae687c122ee504f625f3fe09c9af3c9057bb1728b7ec3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728015823040740414,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e97219ee-64df-4d07-9777-3a8d1eca128e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:54 embed-certs-934812 crio[706]: time="2024-10-04 04:37:54.057447299Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9964d414-a557-4eea-a847-61981620394b name=/runtime.v1.RuntimeService/Version
	Oct 04 04:37:54 embed-certs-934812 crio[706]: time="2024-10-04 04:37:54.057758547Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9964d414-a557-4eea-a847-61981620394b name=/runtime.v1.RuntimeService/Version
	Oct 04 04:37:54 embed-certs-934812 crio[706]: time="2024-10-04 04:37:54.059273656Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bda472a2-ed14-4203-bb39-72ea74ed7625 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:37:54 embed-certs-934812 crio[706]: time="2024-10-04 04:37:54.059655600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016674059634474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bda472a2-ed14-4203-bb39-72ea74ed7625 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:37:54 embed-certs-934812 crio[706]: time="2024-10-04 04:37:54.060113663Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6a4fae7-c4c3-4839-8382-7b7970fa8fd1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:54 embed-certs-934812 crio[706]: time="2024-10-04 04:37:54.060173358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6a4fae7-c4c3-4839-8382-7b7970fa8fd1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:54 embed-certs-934812 crio[706]: time="2024-10-04 04:37:54.060414262Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee2305c441f29525be535936a0d26d917c6641ba676c0cd946dd389e33592d0e,PodSandboxId:ce87104926ac6615ffe2a06ef2da4e5300660538068e1a9b4b4e2da9c6007bab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728016125307438513,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b4ef22-068c-4d14-840e-deab91c5ab94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbdcd3a324f421f3f0dc2aff76b89a9da3e57f5627e61474562ff8414be5510,PodSandboxId:b40d8c44f59da73246613d9666638c42077bc69f9b86d11b17059cc8b2acc9b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728016124144804273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5tbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87deb61f-2ce4-4d45-91da-c16557b5ef75,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188076ac7a7af46670dbbdee6a82210ef511cdb240986543a3a8c664b4b0cb3f,PodSandboxId:4c42d4b4b1430d0c5826a18a6325df53f7738d59c8e79b4fb9ef91294730f56a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728016124135602860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p52s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
9b3cd7f-f28e-4502-a55d-7792cfa5a6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4cec58f8215a7fb666645448240662fbda789f0558c0926b20f9f0958517b9,PodSandboxId:c8ba948590195942a8e3f251cdcfe6f39a0d657c4d719591b6df20174cddc5bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728016123867478706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9czbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedff5a2-62b6-49c3-8369-9182d1c5bf7a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f368c0bb224d66a098cacaa67038f90f3d29fb0beb9bef1fee3cc4bba10ab34,PodSandboxId:04daab29e1a1fcbb05c9905daafdb7e8a4fe30f5c7714977d33aebd4a2afed05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728016112715730070,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395dacd00dc811c334e4fda7898664c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bca5274feb5a67a3e722abb2ee79d56e93ff3ff1cf0350f2dc0eadf6af6764,PodSandboxId:2a81540c23b0366c2d0063654b570487e3d57ef145c5363cd2ea803ab87301b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728016112672617127,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c206ade11d659fd6eef7ef29aa408cde,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b379c78d8a9fc71afe4dbc57caf6f54b81d515dce69290f74a4aaf7a111fab0,PodSandboxId:fb51e9c9fda9bc66053c3f42630875ded0784fabd965b6cd4f255ecd2e6f59db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728016112660334881,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e126f795bcf640ac6233faca19ff5b5e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be902a556db8d8f7a0a1a76bc3064baa5240668a920e4640fefb2198eaafa08b,PodSandboxId:94aae5132834a78902d1424736bb9dd45722628c7813d3da821ab29d14247c97,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728016112627866859,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73de2741451a14dbc69d5790883240975641ebae82f1af6f9acbd6d71c52000e,PodSandboxId:25174628a7c5d2aaeae687c122ee504f625f3fe09c9af3c9057bb1728b7ec3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728015823040740414,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6a4fae7-c4c3-4839-8382-7b7970fa8fd1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ee2305c441f29       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   ce87104926ac6       storage-provisioner
	3cbdcd3a324f4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   b40d8c44f59da       coredns-7c65d6cfc9-h5tbr
	188076ac7a7af       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   4c42d4b4b1430       coredns-7c65d6cfc9-p52s6
	ae4cec58f8215       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   c8ba948590195       kube-proxy-9czbc
	3f368c0bb224d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   04daab29e1a1f       etcd-embed-certs-934812
	25bca5274feb5       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   2a81540c23b03       kube-scheduler-embed-certs-934812
	7b379c78d8a9f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   fb51e9c9fda9b       kube-controller-manager-embed-certs-934812
	be902a556db8d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   94aae5132834a       kube-apiserver-embed-certs-934812
	73de2741451a1       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   25174628a7c5d       kube-apiserver-embed-certs-934812
	
	
	==> coredns [188076ac7a7af46670dbbdee6a82210ef511cdb240986543a3a8c664b4b0cb3f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [3cbdcd3a324f421f3f0dc2aff76b89a9da3e57f5627e61474562ff8414be5510] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-934812
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-934812
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=embed-certs-934812
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T04_28_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 04:28:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-934812
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 04:37:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 04:33:55 +0000   Fri, 04 Oct 2024 04:28:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 04:33:55 +0000   Fri, 04 Oct 2024 04:28:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 04:33:55 +0000   Fri, 04 Oct 2024 04:28:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 04:33:55 +0000   Fri, 04 Oct 2024 04:28:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.74
	  Hostname:    embed-certs-934812
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 360498a1f1444edcb55e87f15c79d8ba
	  System UUID:                360498a1-f144-4edc-b55e-87f15c79d8ba
	  Boot ID:                    401fba8b-79f6-4889-8e22-9516f8ae8624
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-h5tbr                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-7c65d6cfc9-p52s6                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-embed-certs-934812                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-embed-certs-934812             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-embed-certs-934812    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-9czbc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-embed-certs-934812             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-fh2lk               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m9s                   kube-proxy       
	  Normal  Starting                 9m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m22s (x8 over 9m22s)  kubelet          Node embed-certs-934812 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s (x8 over 9m22s)  kubelet          Node embed-certs-934812 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s (x7 over 9m22s)  kubelet          Node embed-certs-934812 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s                  kubelet          Node embed-certs-934812 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s                  kubelet          Node embed-certs-934812 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s                  kubelet          Node embed-certs-934812 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s                  node-controller  Node embed-certs-934812 event: Registered Node embed-certs-934812 in Controller
	
	
	==> dmesg <==
	[  +0.051138] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040154] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.813636] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.507720] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.394041] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.011587] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.055747] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061478] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.183766] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.156398] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.303105] systemd-fstab-generator[697]: Ignoring "noauto" option for root device
	[  +4.328839] systemd-fstab-generator[787]: Ignoring "noauto" option for root device
	[  +0.063432] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.733439] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +5.631506] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.320677] kauditd_printk_skb: 85 callbacks suppressed
	[Oct 4 04:28] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.173021] systemd-fstab-generator[2586]: Ignoring "noauto" option for root device
	[  +4.574723] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.542898] systemd-fstab-generator[2906]: Ignoring "noauto" option for root device
	[  +5.851208] systemd-fstab-generator[3042]: Ignoring "noauto" option for root device
	[  +0.047545] kauditd_printk_skb: 14 callbacks suppressed
	[Oct 4 04:29] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [3f368c0bb224d66a098cacaa67038f90f3d29fb0beb9bef1fee3cc4bba10ab34] <==
	{"level":"info","ts":"2024-10-04T04:28:33.175490Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6a7e013021d70f0","initial-advertise-peer-urls":["https://192.168.61.74:2380"],"listen-peer-urls":["https://192.168.61.74:2380"],"advertise-client-urls":["https://192.168.61.74:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.74:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-04T04:28:33.175610Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-04T04:28:33.169278Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.74:2380"}
	{"level":"info","ts":"2024-10-04T04:28:33.175698Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.74:2380"}
	{"level":"info","ts":"2024-10-04T04:28:33.181524Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6b8a659e8e86db88","local-member-id":"6a7e013021d70f0","added-peer-id":"6a7e013021d70f0","added-peer-peer-urls":["https://192.168.61.74:2380"]}
	{"level":"info","ts":"2024-10-04T04:28:33.600284Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a7e013021d70f0 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-04T04:28:33.600421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a7e013021d70f0 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-04T04:28:33.600469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a7e013021d70f0 received MsgPreVoteResp from 6a7e013021d70f0 at term 1"}
	{"level":"info","ts":"2024-10-04T04:28:33.600502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a7e013021d70f0 became candidate at term 2"}
	{"level":"info","ts":"2024-10-04T04:28:33.600526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a7e013021d70f0 received MsgVoteResp from 6a7e013021d70f0 at term 2"}
	{"level":"info","ts":"2024-10-04T04:28:33.600596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a7e013021d70f0 became leader at term 2"}
	{"level":"info","ts":"2024-10-04T04:28:33.600626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6a7e013021d70f0 elected leader 6a7e013021d70f0 at term 2"}
	{"level":"info","ts":"2024-10-04T04:28:33.604439Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6a7e013021d70f0","local-member-attributes":"{Name:embed-certs-934812 ClientURLs:[https://192.168.61.74:2379]}","request-path":"/0/members/6a7e013021d70f0/attributes","cluster-id":"6b8a659e8e86db88","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T04:28:33.604587Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T04:28:33.604631Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T04:28:33.605098Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T04:28:33.605127Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T04:28:33.605283Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T04:28:33.607821Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:28:33.610981Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T04:28:33.614402Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:28:33.624727Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.74:2379"}
	{"level":"info","ts":"2024-10-04T04:28:33.614978Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6b8a659e8e86db88","local-member-id":"6a7e013021d70f0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T04:28:33.640809Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T04:28:33.653402Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 04:37:54 up 14 min,  0 users,  load average: 0.01, 0.11, 0.12
	Linux embed-certs-934812 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [73de2741451a14dbc69d5790883240975641ebae82f1af6f9acbd6d71c52000e] <==
	W1004 04:28:28.975793       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.020126       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.059908       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.068473       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.093322       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.153978       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.172829       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.228172       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.229651       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.423801       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.457772       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.594699       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.633632       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.633659       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.700101       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.712039       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.735643       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.788383       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.797922       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.819883       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.849307       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.947886       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:30.075118       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:30.078694       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:30.102089       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [be902a556db8d8f7a0a1a76bc3064baa5240668a920e4640fefb2198eaafa08b] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1004 04:33:36.213721       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:33:36.213791       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1004 04:33:36.214777       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:33:36.214834       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 04:34:36.215611       1 handler_proxy.go:99] no RequestInfo found in the context
	W1004 04:34:36.215633       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:34:36.215922       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1004 04:34:36.215926       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1004 04:34:36.218048       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:34:36.218090       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 04:36:36.218834       1 handler_proxy.go:99] no RequestInfo found in the context
	W1004 04:36:36.218903       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:36:36.219250       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1004 04:36:36.219415       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1004 04:36:36.221151       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:36:36.221294       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7b379c78d8a9fc71afe4dbc57caf6f54b81d515dce69290f74a4aaf7a111fab0] <==
	E1004 04:32:42.124439       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:32:42.662681       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:33:12.130531       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:33:12.670274       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:33:42.137601       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:33:42.677843       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 04:33:55.445055       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-934812"
	E1004 04:34:12.145694       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:34:12.685339       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 04:34:29.939963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="293.999µs"
	I1004 04:34:40.937348       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="286.352µs"
	E1004 04:34:42.152792       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:34:42.694965       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:35:12.160023       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:35:12.702623       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:35:42.166693       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:35:42.710909       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:36:12.173310       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:36:12.719031       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:36:42.181266       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:36:42.728688       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:37:12.188662       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:37:12.737117       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:37:42.194975       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:37:42.745179       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ae4cec58f8215a7fb666645448240662fbda789f0558c0926b20f9f0958517b9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 04:28:44.651309       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 04:28:44.668251       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.74"]
	E1004 04:28:44.668320       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 04:28:44.835400       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 04:28:44.839884       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 04:28:44.839910       1 server_linux.go:169] "Using iptables Proxier"
	I1004 04:28:44.864586       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 04:28:44.867565       1 server.go:483] "Version info" version="v1.31.1"
	I1004 04:28:44.867587       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 04:28:44.875410       1 config.go:199] "Starting service config controller"
	I1004 04:28:44.875456       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 04:28:44.875488       1 config.go:105] "Starting endpoint slice config controller"
	I1004 04:28:44.875492       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 04:28:44.875998       1 config.go:328] "Starting node config controller"
	I1004 04:28:44.876005       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 04:28:44.980313       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 04:28:45.001871       1 shared_informer.go:320] Caches are synced for service config
	I1004 04:28:45.008468       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [25bca5274feb5a67a3e722abb2ee79d56e93ff3ff1cf0350f2dc0eadf6af6764] <==
	W1004 04:28:36.071459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 04:28:36.071501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.105401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 04:28:36.105516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.140621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 04:28:36.140691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.165999       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 04:28:36.166095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.199374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 04:28:36.199501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.269460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 04:28:36.269655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.347395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 04:28:36.347448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.441774       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 04:28:36.441825       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1004 04:28:36.480988       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 04:28:36.481043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.500958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 04:28:36.501020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.552930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 04:28:36.553613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.580411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 04:28:36.580464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 04:28:38.325842       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 04:36:48 embed-certs-934812 kubelet[2913]: E1004 04:36:48.068781    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016608068520362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:36:48 embed-certs-934812 kubelet[2913]: E1004 04:36:48.068823    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016608068520362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:36:54 embed-certs-934812 kubelet[2913]: E1004 04:36:54.920103    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fh2lk" podUID="12e3e884-2ad3-4eaa-a505-822717e5bc8c"
	Oct 04 04:36:58 embed-certs-934812 kubelet[2913]: E1004 04:36:58.070795    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016618070386733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:36:58 embed-certs-934812 kubelet[2913]: E1004 04:36:58.071116    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016618070386733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:06 embed-certs-934812 kubelet[2913]: E1004 04:37:06.919867    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fh2lk" podUID="12e3e884-2ad3-4eaa-a505-822717e5bc8c"
	Oct 04 04:37:08 embed-certs-934812 kubelet[2913]: E1004 04:37:08.074073    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016628073567211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:08 embed-certs-934812 kubelet[2913]: E1004 04:37:08.074115    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016628073567211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:17 embed-certs-934812 kubelet[2913]: E1004 04:37:17.921507    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fh2lk" podUID="12e3e884-2ad3-4eaa-a505-822717e5bc8c"
	Oct 04 04:37:18 embed-certs-934812 kubelet[2913]: E1004 04:37:18.075908    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016638075460226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:18 embed-certs-934812 kubelet[2913]: E1004 04:37:18.076065    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016638075460226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:28 embed-certs-934812 kubelet[2913]: E1004 04:37:28.077427    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016648077139209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:28 embed-certs-934812 kubelet[2913]: E1004 04:37:28.077473    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016648077139209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:29 embed-certs-934812 kubelet[2913]: E1004 04:37:29.922429    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fh2lk" podUID="12e3e884-2ad3-4eaa-a505-822717e5bc8c"
	Oct 04 04:37:37 embed-certs-934812 kubelet[2913]: E1004 04:37:37.962534    2913 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 04:37:37 embed-certs-934812 kubelet[2913]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 04:37:37 embed-certs-934812 kubelet[2913]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 04:37:37 embed-certs-934812 kubelet[2913]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 04:37:37 embed-certs-934812 kubelet[2913]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 04:37:38 embed-certs-934812 kubelet[2913]: E1004 04:37:38.080287    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016658079575374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:38 embed-certs-934812 kubelet[2913]: E1004 04:37:38.080364    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016658079575374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:40 embed-certs-934812 kubelet[2913]: E1004 04:37:40.920176    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fh2lk" podUID="12e3e884-2ad3-4eaa-a505-822717e5bc8c"
	Oct 04 04:37:48 embed-certs-934812 kubelet[2913]: E1004 04:37:48.082808    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016668082133337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:48 embed-certs-934812 kubelet[2913]: E1004 04:37:48.083073    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016668082133337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:53 embed-certs-934812 kubelet[2913]: E1004 04:37:53.921985    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fh2lk" podUID="12e3e884-2ad3-4eaa-a505-822717e5bc8c"
	
	
	==> storage-provisioner [ee2305c441f29525be535936a0d26d917c6641ba676c0cd946dd389e33592d0e] <==
	I1004 04:28:45.430117       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 04:28:45.445326       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 04:28:45.445390       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 04:28:45.458072       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 04:28:45.458880       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-934812_a13376cf-89b1-44f7-9229-91123f906dfe!
	I1004 04:28:45.465338       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5655821d-afa4-442d-a23b-224ce4c930c8", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-934812_a13376cf-89b1-44f7-9229-91123f906dfe became leader
	I1004 04:28:45.559436       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-934812_a13376cf-89b1-44f7-9229-91123f906dfe!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-934812 -n embed-certs-934812
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-934812 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-fh2lk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-934812 describe pod metrics-server-6867b74b74-fh2lk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-934812 describe pod metrics-server-6867b74b74-fh2lk: exit status 1 (65.25116ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-fh2lk" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-934812 describe pod metrics-server-6867b74b74-fh2lk: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.21s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-04 04:37:57.04047084 +0000 UTC m=+6595.973411391
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-281471 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-281471 logs -n 25: (2.016125546s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-658545                                   | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-934812            | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-934812                                  | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-617497             | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-617497                  | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617497 --memory=2200 --alsologtostderr   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:17 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-617497 image list                           | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	| delete  | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:18 UTC |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-658545                  | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-658545                                   | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC | 04 Oct 24 04:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-281471  | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC | 04 Oct 24 04:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-420062        | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-934812                 | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-934812                                  | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:19 UTC | 04 Oct 24 04:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC | 04 Oct 24 04:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-420062             | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC | 04 Oct 24 04:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-281471       | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:21 UTC | 04 Oct 24 04:28 UTC |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 04:21:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 04:21:23.276574   67541 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:21:23.276701   67541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:21:23.276710   67541 out.go:358] Setting ErrFile to fd 2...
	I1004 04:21:23.276715   67541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:21:23.276893   67541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:21:23.277439   67541 out.go:352] Setting JSON to false
	I1004 04:21:23.278387   67541 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7428,"bootTime":1728008255,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 04:21:23.278482   67541 start.go:139] virtualization: kvm guest
	I1004 04:21:23.280571   67541 out.go:177] * [default-k8s-diff-port-281471] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 04:21:23.282033   67541 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 04:21:23.282063   67541 notify.go:220] Checking for updates...
	I1004 04:21:23.284454   67541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 04:21:23.285843   67541 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:21:23.287026   67541 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:21:23.288328   67541 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 04:21:23.289544   67541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 04:21:23.291321   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:21:23.291979   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:21:23.292059   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:21:23.306995   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I1004 04:21:23.307440   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:21:23.308080   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:21:23.308106   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:21:23.308442   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:21:23.308642   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:21:23.308893   67541 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 04:21:23.309208   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:21:23.309280   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:21:23.323807   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I1004 04:21:23.324281   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:21:23.324777   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:21:23.324797   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:21:23.325085   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:21:23.325248   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:21:23.359916   67541 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 04:21:23.361482   67541 start.go:297] selected driver: kvm2
	I1004 04:21:23.361504   67541 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:21:23.361657   67541 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 04:21:23.362533   67541 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:21:23.362621   67541 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 04:21:23.378088   67541 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 04:21:23.378515   67541 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:21:23.378547   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:21:23.378591   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:21:23.378627   67541 start.go:340] cluster config:
	{Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:21:23.378727   67541 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:21:23.380705   67541 out.go:177] * Starting "default-k8s-diff-port-281471" primary control-plane node in "default-k8s-diff-port-281471" cluster
	I1004 04:21:20.068102   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:23.140106   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:23.381986   67541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:21:23.382036   67541 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 04:21:23.382048   67541 cache.go:56] Caching tarball of preloaded images
	I1004 04:21:23.382125   67541 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 04:21:23.382135   67541 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 04:21:23.382254   67541 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/config.json ...
	I1004 04:21:23.382433   67541 start.go:360] acquireMachinesLock for default-k8s-diff-port-281471: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:21:29.220163   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:32.292105   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:38.372080   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:41.444091   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:47.524103   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:50.596091   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:56.676086   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:59.748055   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:05.828125   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:08.900042   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:14.980094   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:18.052114   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:24.132087   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:27.204139   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:33.284040   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:36.356076   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:42.436190   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:45.508075   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:51.588061   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:54.660042   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:00.740141   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:03.812099   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:09.892076   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:12.964133   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:15.968919   66755 start.go:364] duration metric: took 4m6.72532498s to acquireMachinesLock for "embed-certs-934812"
	I1004 04:23:15.968984   66755 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:15.968992   66755 fix.go:54] fixHost starting: 
	I1004 04:23:15.969309   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:15.969356   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:15.984739   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34447
	I1004 04:23:15.985214   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:15.985743   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:23:15.985769   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:15.986104   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:15.986289   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:15.986449   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:23:15.988237   66755 fix.go:112] recreateIfNeeded on embed-certs-934812: state=Stopped err=<nil>
	I1004 04:23:15.988263   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	W1004 04:23:15.988415   66755 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:15.990473   66755 out.go:177] * Restarting existing kvm2 VM for "embed-certs-934812" ...
	I1004 04:23:15.965929   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:15.965974   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:23:15.966321   66293 buildroot.go:166] provisioning hostname "no-preload-658545"
	I1004 04:23:15.966348   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:23:15.966530   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:23:15.968760   66293 machine.go:96] duration metric: took 4m37.423316886s to provisionDockerMachine
	I1004 04:23:15.968806   66293 fix.go:56] duration metric: took 4m37.446149084s for fixHost
	I1004 04:23:15.968814   66293 start.go:83] releasing machines lock for "no-preload-658545", held for 4m37.446179902s
	W1004 04:23:15.968836   66293 start.go:714] error starting host: provision: host is not running
	W1004 04:23:15.968935   66293 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1004 04:23:15.968946   66293 start.go:729] Will try again in 5 seconds ...
	I1004 04:23:15.991914   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Start
	I1004 04:23:15.992106   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring networks are active...
	I1004 04:23:15.992995   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring network default is active
	I1004 04:23:15.993392   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring network mk-embed-certs-934812 is active
	I1004 04:23:15.993728   66755 main.go:141] libmachine: (embed-certs-934812) Getting domain xml...
	I1004 04:23:15.994410   66755 main.go:141] libmachine: (embed-certs-934812) Creating domain...
	I1004 04:23:17.232262   66755 main.go:141] libmachine: (embed-certs-934812) Waiting to get IP...
	I1004 04:23:17.233339   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.233793   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.233879   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.233797   67957 retry.go:31] will retry after 221.075745ms: waiting for machine to come up
	I1004 04:23:17.456413   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.456917   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.456941   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.456869   67957 retry.go:31] will retry after 354.386237ms: waiting for machine to come up
	I1004 04:23:17.812523   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.812949   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.812973   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.812905   67957 retry.go:31] will retry after 338.999517ms: waiting for machine to come up
	I1004 04:23:18.153589   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:18.154029   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:18.154056   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:18.153987   67957 retry.go:31] will retry after 555.533205ms: waiting for machine to come up
	I1004 04:23:18.710680   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:18.711155   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:18.711181   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:18.711104   67957 retry.go:31] will retry after 733.812197ms: waiting for machine to come up
	I1004 04:23:20.970507   66293 start.go:360] acquireMachinesLock for no-preload-658545: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:23:19.447202   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:19.447644   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:19.447671   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:19.447600   67957 retry.go:31] will retry after 575.303848ms: waiting for machine to come up
	I1004 04:23:20.024465   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:20.024788   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:20.024819   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:20.024735   67957 retry.go:31] will retry after 894.593683ms: waiting for machine to come up
	I1004 04:23:20.920880   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:20.921499   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:20.921522   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:20.921480   67957 retry.go:31] will retry after 924.978895ms: waiting for machine to come up
	I1004 04:23:21.848064   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:21.848498   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:21.848619   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:21.848550   67957 retry.go:31] will retry after 1.554806984s: waiting for machine to come up
	I1004 04:23:23.404569   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:23.404936   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:23.404964   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:23.404884   67957 retry.go:31] will retry after 1.700496318s: waiting for machine to come up
	I1004 04:23:25.106988   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:25.107410   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:25.107441   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:25.107351   67957 retry.go:31] will retry after 1.913555474s: waiting for machine to come up
	I1004 04:23:27.022672   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:27.023134   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:27.023161   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:27.023096   67957 retry.go:31] will retry after 3.208946613s: waiting for machine to come up
	I1004 04:23:30.235462   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:30.235910   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:30.235942   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:30.235868   67957 retry.go:31] will retry after 3.125545279s: waiting for machine to come up
	I1004 04:23:33.364563   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.365007   66755 main.go:141] libmachine: (embed-certs-934812) Found IP for machine: 192.168.61.74
	I1004 04:23:33.365031   66755 main.go:141] libmachine: (embed-certs-934812) Reserving static IP address...
	I1004 04:23:33.365047   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has current primary IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.365595   66755 main.go:141] libmachine: (embed-certs-934812) Reserved static IP address: 192.168.61.74
	I1004 04:23:33.365628   66755 main.go:141] libmachine: (embed-certs-934812) Waiting for SSH to be available...
	I1004 04:23:33.365648   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "embed-certs-934812", mac: "52:54:00:25:fb:50", ip: "192.168.61.74"} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.365667   66755 main.go:141] libmachine: (embed-certs-934812) DBG | skip adding static IP to network mk-embed-certs-934812 - found existing host DHCP lease matching {name: "embed-certs-934812", mac: "52:54:00:25:fb:50", ip: "192.168.61.74"}
	I1004 04:23:33.365682   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Getting to WaitForSSH function...
	I1004 04:23:33.367835   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.368155   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.368185   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.368297   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Using SSH client type: external
	I1004 04:23:33.368322   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa (-rw-------)
	I1004 04:23:33.368359   66755 main.go:141] libmachine: (embed-certs-934812) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.74 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:23:33.368369   66755 main.go:141] libmachine: (embed-certs-934812) DBG | About to run SSH command:
	I1004 04:23:33.368377   66755 main.go:141] libmachine: (embed-certs-934812) DBG | exit 0
	I1004 04:23:33.496067   66755 main.go:141] libmachine: (embed-certs-934812) DBG | SSH cmd err, output: <nil>: 
	I1004 04:23:33.496559   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetConfigRaw
	I1004 04:23:33.497310   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:33.500858   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.501360   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.501403   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.501750   66755 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/config.json ...
	I1004 04:23:33.502058   66755 machine.go:93] provisionDockerMachine start ...
	I1004 04:23:33.502084   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:33.502303   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.505899   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.506442   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.506475   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.506686   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.506947   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.507165   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.507324   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.507541   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.507744   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.507757   66755 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:23:33.624518   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:23:33.624547   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.624795   66755 buildroot.go:166] provisioning hostname "embed-certs-934812"
	I1004 04:23:33.624826   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.625021   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.627597   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.627916   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.627948   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.628115   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.628312   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.628444   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.628608   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.628785   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.629023   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.629040   66755 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-934812 && echo "embed-certs-934812" | sudo tee /etc/hostname
	I1004 04:23:33.758642   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-934812
	
	I1004 04:23:33.758681   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.761325   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.761654   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.761696   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.761849   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.762034   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.762164   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.762297   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.762426   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.762636   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.762652   66755 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-934812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-934812/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-934812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:23:33.889571   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:33.889601   66755 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:23:33.889642   66755 buildroot.go:174] setting up certificates
	I1004 04:23:33.889654   66755 provision.go:84] configureAuth start
	I1004 04:23:33.889681   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.889992   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:33.892657   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.893063   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.893087   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.893310   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.895770   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.896126   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.896162   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.896328   66755 provision.go:143] copyHostCerts
	I1004 04:23:33.896397   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:23:33.896408   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:23:33.896472   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:23:33.896565   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:23:33.896573   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:23:33.896595   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:23:33.896652   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:23:33.896659   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:23:33.896678   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:23:33.896724   66755 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.embed-certs-934812 san=[127.0.0.1 192.168.61.74 embed-certs-934812 localhost minikube]
	I1004 04:23:33.997867   66755 provision.go:177] copyRemoteCerts
	I1004 04:23:33.997923   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:23:33.997950   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.001050   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.001422   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.001461   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.001733   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.001961   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.002125   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.002246   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.090823   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:23:34.116934   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1004 04:23:34.669084   67282 start.go:364] duration metric: took 2m46.052475725s to acquireMachinesLock for "old-k8s-version-420062"
	I1004 04:23:34.669158   67282 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:34.669168   67282 fix.go:54] fixHost starting: 
	I1004 04:23:34.669584   67282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:34.669640   67282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:34.686790   67282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I1004 04:23:34.687312   67282 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:34.687829   67282 main.go:141] libmachine: Using API Version  1
	I1004 04:23:34.687857   67282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:34.688238   67282 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:34.688415   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:34.688579   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetState
	I1004 04:23:34.690288   67282 fix.go:112] recreateIfNeeded on old-k8s-version-420062: state=Stopped err=<nil>
	I1004 04:23:34.690326   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	W1004 04:23:34.690467   67282 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:34.692283   67282 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-420062" ...
	I1004 04:23:34.143763   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 04:23:34.168897   66755 provision.go:87] duration metric: took 279.227966ms to configureAuth
	I1004 04:23:34.168929   66755 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:23:34.169096   66755 config.go:182] Loaded profile config "embed-certs-934812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:23:34.169168   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.171638   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.171952   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.171977   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.172178   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.172349   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.172503   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.172594   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.172717   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:34.172924   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:34.172943   66755 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:23:34.411661   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:23:34.411690   66755 machine.go:96] duration metric: took 909.61315ms to provisionDockerMachine
	I1004 04:23:34.411703   66755 start.go:293] postStartSetup for "embed-certs-934812" (driver="kvm2")
	I1004 04:23:34.411716   66755 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:23:34.411734   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.412070   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:23:34.412099   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.415246   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.415583   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.415643   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.415802   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.415997   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.416170   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.416322   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.507385   66755 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:23:34.511963   66755 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:23:34.511990   66755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:23:34.512064   66755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:23:34.512152   66755 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:23:34.512270   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:23:34.522375   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:34.547860   66755 start.go:296] duration metric: took 136.143527ms for postStartSetup
	I1004 04:23:34.547904   66755 fix.go:56] duration metric: took 18.578910472s for fixHost
	I1004 04:23:34.547931   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.550715   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.551031   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.551067   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.551194   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.551391   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.551568   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.551724   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.551903   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:34.552055   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:34.552064   66755 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:23:34.668944   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015814.641353752
	
	I1004 04:23:34.668966   66755 fix.go:216] guest clock: 1728015814.641353752
	I1004 04:23:34.668974   66755 fix.go:229] Guest: 2024-10-04 04:23:34.641353752 +0000 UTC Remote: 2024-10-04 04:23:34.547909289 +0000 UTC m=+265.449211021 (delta=93.444463ms)
	I1004 04:23:34.668993   66755 fix.go:200] guest clock delta is within tolerance: 93.444463ms
	I1004 04:23:34.668999   66755 start.go:83] releasing machines lock for "embed-certs-934812", held for 18.70003051s
	I1004 04:23:34.669024   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.669299   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:34.672346   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.672757   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.672796   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.672966   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673609   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673816   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673940   66755 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:23:34.673982   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.674020   66755 ssh_runner.go:195] Run: cat /version.json
	I1004 04:23:34.674043   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.676934   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677085   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677379   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.677406   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677449   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.677480   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677560   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.677677   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.677758   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.677811   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.677873   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.677928   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.677979   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.678022   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.761509   66755 ssh_runner.go:195] Run: systemctl --version
	I1004 04:23:34.784487   66755 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:23:34.934037   66755 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:23:34.942569   66755 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:23:34.942642   66755 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:23:34.960164   66755 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:23:34.960197   66755 start.go:495] detecting cgroup driver to use...
	I1004 04:23:34.960276   66755 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:23:34.979195   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:23:34.994660   66755 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:23:34.994747   66755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:23:35.011209   66755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:23:35.031746   66755 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:23:35.146164   66755 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:23:35.287092   66755 docker.go:233] disabling docker service ...
	I1004 04:23:35.287167   66755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:23:35.308007   66755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:23:35.323235   66755 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:23:35.473583   66755 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:23:35.610098   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:23:35.624276   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:23:35.643810   66755 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:23:35.643873   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.655804   66755 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:23:35.655875   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.668260   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.679770   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.692649   66755 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:23:35.704364   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.715539   66755 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.739272   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.754538   66755 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:23:35.766476   66755 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:23:35.766566   66755 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:23:35.781677   66755 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
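	The three commands above are the bridge-netfilter fallback: the sysctl probe fails because the key does not exist yet, so the provisioner loads br_netfilter and then enables IPv4 forwarding. Below is a minimal Go sketch of that same check-then-fallback pattern; it runs the commands locally with os/exec purely as an illustration, not over SSH as the real ssh_runner does, and would need root in practice.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
		}
		return nil
	}

	func main() {
		// If the sysctl key is absent, the bridge netfilter module is not loaded yet.
		if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			fmt.Println("sysctl check failed, loading br_netfilter:", err)
			if err := run("modprobe", "br_netfilter"); err != nil {
				fmt.Println(err)
			}
		}
		// Enable IPv4 forwarding, as the provisioning step in the log does.
		if err := run("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			fmt.Println(err)
		}
	}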
	I1004 04:23:35.792640   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:35.910787   66755 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:23:36.015877   66755 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:23:36.015948   66755 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:23:36.021573   66755 start.go:563] Will wait 60s for crictl version
	I1004 04:23:36.021642   66755 ssh_runner.go:195] Run: which crictl
	I1004 04:23:36.025605   66755 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:23:36.064644   66755 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:23:36.064714   66755 ssh_runner.go:195] Run: crio --version
	I1004 04:23:36.094751   66755 ssh_runner.go:195] Run: crio --version
	I1004 04:23:36.127213   66755 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:23:34.693590   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .Start
	I1004 04:23:34.693792   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring networks are active...
	I1004 04:23:34.694582   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring network default is active
	I1004 04:23:34.694917   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring network mk-old-k8s-version-420062 is active
	I1004 04:23:34.695322   67282 main.go:141] libmachine: (old-k8s-version-420062) Getting domain xml...
	I1004 04:23:34.696052   67282 main.go:141] libmachine: (old-k8s-version-420062) Creating domain...
	I1004 04:23:35.995511   67282 main.go:141] libmachine: (old-k8s-version-420062) Waiting to get IP...
	I1004 04:23:35.996465   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:35.996962   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:35.997031   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:35.996923   68093 retry.go:31] will retry after 296.620059ms: waiting for machine to come up
	I1004 04:23:36.295737   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:36.296226   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:36.296257   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:36.296182   68093 retry.go:31] will retry after 311.736827ms: waiting for machine to come up
	I1004 04:23:36.610158   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:36.610804   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:36.610829   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:36.610759   68093 retry.go:31] will retry after 440.646496ms: waiting for machine to come up
	I1004 04:23:37.053487   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:37.053956   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:37.053981   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:37.053923   68093 retry.go:31] will retry after 550.190101ms: waiting for machine to come up
	I1004 04:23:37.605404   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:37.605775   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:37.605815   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:37.605743   68093 retry.go:31] will retry after 721.648529ms: waiting for machine to come up
	I1004 04:23:38.328819   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:38.329323   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:38.329362   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:38.329281   68093 retry.go:31] will retry after 825.234448ms: waiting for machine to come up
	I1004 04:23:36.128549   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:36.131439   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:36.131827   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:36.131856   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:36.132054   66755 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1004 04:23:36.136650   66755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:23:36.149563   66755 kubeadm.go:883] updating cluster {Name:embed-certs-934812 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:23:36.149691   66755 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:23:36.149738   66755 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:36.188235   66755 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:23:36.188316   66755 ssh_runner.go:195] Run: which lz4
	I1004 04:23:36.192619   66755 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:23:36.196876   66755 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:23:36.196909   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 04:23:37.711672   66755 crio.go:462] duration metric: took 1.519102092s to copy over tarball
	I1004 04:23:37.711752   66755 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:23:39.155736   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:39.156199   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:39.156229   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:39.156150   68093 retry.go:31] will retry after 970.793402ms: waiting for machine to come up
	I1004 04:23:40.128963   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:40.129454   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:40.129507   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:40.129419   68093 retry.go:31] will retry after 1.460395601s: waiting for machine to come up
	I1004 04:23:41.592145   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:41.592653   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:41.592677   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:41.592600   68093 retry.go:31] will retry after 1.397092356s: waiting for machine to come up
	I1004 04:23:42.992176   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:42.992670   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:42.992724   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:42.992663   68093 retry.go:31] will retry after 1.560294099s: waiting for machine to come up
	I1004 04:23:39.864408   66755 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.152629063s)
	I1004 04:23:39.864437   66755 crio.go:469] duration metric: took 2.152732931s to extract the tarball
	I1004 04:23:39.864446   66755 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:23:39.902496   66755 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:39.956348   66755 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:23:39.956373   66755 cache_images.go:84] Images are preloaded, skipping loading
	I1004 04:23:39.956381   66755 kubeadm.go:934] updating node { 192.168.61.74 8443 v1.31.1 crio true true} ...
	I1004 04:23:39.956509   66755 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-934812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:23:39.956572   66755 ssh_runner.go:195] Run: crio config
	I1004 04:23:40.014396   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:23:40.014423   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:23:40.014436   66755 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:23:40.014470   66755 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.74 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-934812 NodeName:embed-certs-934812 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:23:40.014642   66755 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-934812"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:23:40.014728   66755 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:23:40.025328   66755 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:23:40.025441   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:23:40.035733   66755 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1004 04:23:40.057427   66755 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:23:40.078636   66755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
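	The kubeadm config printed above is generated with the node name, IP and Kubernetes version filled in and then copied to /var/tmp/minikube/kubeadm.yaml.new on the guest. A minimal sketch (not minikube's actual generator) of producing a config like it from a text/template, using the values that appear in this log:

	package main

	import (
		"os"
		"text/template"
	)

	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: 8443
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	networking:
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		data := struct {
			NodeName, NodeIP, KubernetesVersion string
		}{"embed-certs-934812", "192.168.61.74", "v1.31.1"}
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}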
	I1004 04:23:40.100583   66755 ssh_runner.go:195] Run: grep 192.168.61.74	control-plane.minikube.internal$ /etc/hosts
	I1004 04:23:40.104780   66755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.74	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:23:40.118484   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:40.245425   66755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:23:40.268739   66755 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812 for IP: 192.168.61.74
	I1004 04:23:40.268764   66755 certs.go:194] generating shared ca certs ...
	I1004 04:23:40.268792   66755 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:23:40.268962   66755 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:23:40.269022   66755 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:23:40.269035   66755 certs.go:256] generating profile certs ...
	I1004 04:23:40.269145   66755 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/client.key
	I1004 04:23:40.269226   66755 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.key.0181efa9
	I1004 04:23:40.269290   66755 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.key
	I1004 04:23:40.269436   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:23:40.269483   66755 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:23:40.269497   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:23:40.269535   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:23:40.269575   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:23:40.269607   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:23:40.269658   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:40.270269   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:23:40.316579   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:23:40.352928   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:23:40.383124   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:23:40.410211   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1004 04:23:40.442388   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 04:23:40.473580   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:23:40.501589   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:23:40.527299   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:23:40.551994   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:23:40.576644   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:23:40.601518   66755 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:23:40.620092   66755 ssh_runner.go:195] Run: openssl version
	I1004 04:23:40.626451   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:23:40.637754   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.642413   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.642472   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.648449   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:23:40.659371   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:23:40.670276   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.674793   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.674844   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.680550   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:23:40.691439   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:23:40.702237   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.706876   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.706937   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.712970   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:23:40.724505   66755 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:23:40.729486   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:23:40.735720   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:23:40.741680   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:23:40.747975   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:23:40.754056   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:23:40.760235   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
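	Each `openssl x509 -noout -in ... -checkend 86400` call above asks whether the certificate will have expired 86400 seconds (24 hours) from now. An equivalent check in Go with crypto/x509 is sketched below, using one of the logged certificate paths only as an example input.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// openssl's -checkend N asks: will the certificate be expired N seconds from now?
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}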
	I1004 04:23:40.766463   66755 kubeadm.go:392] StartCluster: {Name:embed-certs-934812 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:23:40.766576   66755 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:23:40.766635   66755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:23:40.805927   66755 cri.go:89] found id: ""
	I1004 04:23:40.805995   66755 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:23:40.816693   66755 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:23:40.816717   66755 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:23:40.816770   66755 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:23:40.827024   66755 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:23:40.828056   66755 kubeconfig.go:125] found "embed-certs-934812" server: "https://192.168.61.74:8443"
	I1004 04:23:40.830076   66755 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:23:40.840637   66755 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.74
	I1004 04:23:40.840673   66755 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:23:40.840686   66755 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:23:40.840741   66755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:23:40.877659   66755 cri.go:89] found id: ""
	I1004 04:23:40.877737   66755 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:23:40.894712   66755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:23:40.904202   66755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:23:40.904224   66755 kubeadm.go:157] found existing configuration files:
	
	I1004 04:23:40.904290   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:23:40.913941   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:23:40.914003   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:23:40.924730   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:23:40.934706   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:23:40.934784   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:23:40.945008   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:23:40.954864   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:23:40.954949   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:23:40.965357   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:23:40.975380   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:23:40.975459   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:23:40.986157   66755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:23:41.001260   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:41.129150   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:41.839910   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:42.059079   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:42.132717   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:42.204227   66755 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:23:42.204389   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:42.704572   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.205099   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.704555   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.720983   66755 api_server.go:72] duration metric: took 1.516755506s to wait for apiserver process to appear ...
	I1004 04:23:43.721020   66755 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:23:43.721043   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.578729   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:23:46.578764   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:23:46.578780   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.611578   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:23:46.611609   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:23:46.721894   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.728611   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:46.728649   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:47.221889   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:47.229348   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:47.229382   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:47.721971   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:47.741433   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:47.741460   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:48.222154   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:48.226802   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 200:
	ok
	I1004 04:23:48.233611   66755 api_server.go:141] control plane version: v1.31.1
	I1004 04:23:48.233645   66755 api_server.go:131] duration metric: took 4.512616682s to wait for apiserver health ...
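	The healthz probes above start at 403 (the unauthenticated request is rejected while the RBAC bootstrap roles are still being created), move to 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and settle at 200 after roughly 4.5 seconds. A minimal sketch of that polling loop, assuming only the endpoint URL seen in the log (certificate verification is skipped here purely for brevity):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported "ok"
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.74:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}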
	I1004 04:23:48.233655   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:23:48.233662   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:23:48.235421   66755 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:23:44.555619   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:44.556128   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:44.556154   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:44.556061   68093 retry.go:31] will retry after 2.564674777s: waiting for machine to come up
	I1004 04:23:47.123819   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:47.124235   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:47.124263   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:47.124181   68093 retry.go:31] will retry after 2.408805702s: waiting for machine to come up
	I1004 04:23:48.236675   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:23:48.248304   66755 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:23:48.273584   66755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:23:48.288132   66755 system_pods.go:59] 8 kube-system pods found
	I1004 04:23:48.288174   66755 system_pods.go:61] "coredns-7c65d6cfc9-z7pqn" [f206a8bf-5c18-49f2-9fae-a48a38d608a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:23:48.288208   66755 system_pods.go:61] "etcd-embed-certs-934812" [07a8f2db-6d47-469b-b0e4-749d1e106522] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:23:48.288218   66755 system_pods.go:61] "kube-apiserver-embed-certs-934812" [f36bc69a-a04e-40c2-8f78-a983ddbf28aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:23:48.288227   66755 system_pods.go:61] "kube-controller-manager-embed-certs-934812" [06d73118-fa31-4c98-b1e8-099611718b19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:23:48.288232   66755 system_pods.go:61] "kube-proxy-9qpgb" [6d833f16-4b8e-4409-99b6-214babe699c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 04:23:48.288238   66755 system_pods.go:61] "kube-scheduler-embed-certs-934812" [d076a245-49b6-4d8b-949a-2b559cd1d4d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:23:48.288243   66755 system_pods.go:61] "metrics-server-6867b74b74-d5b6b" [f4ec5d83-22a7-49e5-97e9-3519a29484fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:23:48.288250   66755 system_pods.go:61] "storage-provisioner" [2e76a95b-d6e2-4c1d-b954-3da8c2670a4a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 04:23:48.288259   66755 system_pods.go:74] duration metric: took 14.644463ms to wait for pod list to return data ...
	I1004 04:23:48.288265   66755 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:23:48.293121   66755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:23:48.293153   66755 node_conditions.go:123] node cpu capacity is 2
	I1004 04:23:48.293166   66755 node_conditions.go:105] duration metric: took 4.895489ms to run NodePressure ...
	I1004 04:23:48.293184   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:48.633398   66755 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:23:48.639243   66755 kubeadm.go:739] kubelet initialised
	I1004 04:23:48.639282   66755 kubeadm.go:740] duration metric: took 5.842777ms waiting for restarted kubelet to initialise ...
	I1004 04:23:48.639293   66755 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:23:48.650460   66755 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace to be "Ready" ...
	I1004 04:23:49.535979   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:49.536361   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:49.536388   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:49.536332   68093 retry.go:31] will retry after 4.242056709s: waiting for machine to come up
	I1004 04:23:50.657094   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:52.657717   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:55.089234   67541 start.go:364] duration metric: took 2m31.706739813s to acquireMachinesLock for "default-k8s-diff-port-281471"
	I1004 04:23:55.089300   67541 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:55.089311   67541 fix.go:54] fixHost starting: 
	I1004 04:23:55.089673   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:55.089718   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:55.110154   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
	I1004 04:23:55.110566   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:55.111001   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:23:55.111025   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:55.111417   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:55.111627   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:23:55.111794   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:23:55.113328   67541 fix.go:112] recreateIfNeeded on default-k8s-diff-port-281471: state=Stopped err=<nil>
	I1004 04:23:55.113356   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	W1004 04:23:55.113537   67541 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:55.115190   67541 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-281471" ...
	I1004 04:23:53.783128   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.783631   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has current primary IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.783669   67282 main.go:141] libmachine: (old-k8s-version-420062) Found IP for machine: 192.168.50.146
	I1004 04:23:53.783684   67282 main.go:141] libmachine: (old-k8s-version-420062) Reserving static IP address...
	I1004 04:23:53.784173   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "old-k8s-version-420062", mac: "52:54:00:fb:e4:4f", ip: "192.168.50.146"} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.784206   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | skip adding static IP to network mk-old-k8s-version-420062 - found existing host DHCP lease matching {name: "old-k8s-version-420062", mac: "52:54:00:fb:e4:4f", ip: "192.168.50.146"}
	I1004 04:23:53.784222   67282 main.go:141] libmachine: (old-k8s-version-420062) Reserved static IP address: 192.168.50.146
	I1004 04:23:53.784238   67282 main.go:141] libmachine: (old-k8s-version-420062) Waiting for SSH to be available...
	I1004 04:23:53.784250   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Getting to WaitForSSH function...
	I1004 04:23:53.786551   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.786985   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.787016   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.787207   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using SSH client type: external
	I1004 04:23:53.787244   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa (-rw-------)
	I1004 04:23:53.787285   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:23:53.787301   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | About to run SSH command:
	I1004 04:23:53.787315   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | exit 0
	I1004 04:23:53.916121   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | SSH cmd err, output: <nil>: 
	I1004 04:23:53.916487   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetConfigRaw
	I1004 04:23:53.917200   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:53.919846   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.920295   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.920323   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.920641   67282 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/config.json ...
	I1004 04:23:53.920902   67282 machine.go:93] provisionDockerMachine start ...
	I1004 04:23:53.920930   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:53.921137   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:53.923647   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.924000   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.924039   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.924198   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:53.924375   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:53.924508   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:53.924659   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:53.924796   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:53.925024   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:53.925036   67282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:23:54.044565   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:23:54.044595   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.044820   67282 buildroot.go:166] provisioning hostname "old-k8s-version-420062"
	I1004 04:23:54.044837   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.045006   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.047682   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.048032   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.048060   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.048186   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.048376   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.048525   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.048694   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.048853   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.049077   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.049098   67282 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-420062 && echo "old-k8s-version-420062" | sudo tee /etc/hostname
	I1004 04:23:54.183772   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-420062
	
	I1004 04:23:54.183835   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.186969   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.187333   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.187368   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.187754   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.188000   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.188177   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.188334   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.188559   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.188778   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.188803   67282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-420062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-420062/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-420062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:23:54.313827   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:54.313852   67282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:23:54.313896   67282 buildroot.go:174] setting up certificates
	I1004 04:23:54.313913   67282 provision.go:84] configureAuth start
	I1004 04:23:54.313925   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.314208   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:54.317028   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.317378   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.317408   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.317549   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.320292   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.320690   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.320718   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.320874   67282 provision.go:143] copyHostCerts
	I1004 04:23:54.320945   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:23:54.320957   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:23:54.321020   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:23:54.321144   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:23:54.321157   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:23:54.321184   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:23:54.321269   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:23:54.321279   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:23:54.321306   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:23:54.321378   67282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-420062 san=[127.0.0.1 192.168.50.146 localhost minikube old-k8s-version-420062]
	I1004 04:23:54.395370   67282 provision.go:177] copyRemoteCerts
	I1004 04:23:54.395422   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:23:54.395452   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.398647   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.399153   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.399194   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.399392   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.399582   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.399852   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.399991   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:54.491055   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:23:54.523206   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1004 04:23:54.549843   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:23:54.580403   67282 provision.go:87] duration metric: took 266.475364ms to configureAuth
	I1004 04:23:54.580438   67282 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:23:54.580645   67282 config.go:182] Loaded profile config "old-k8s-version-420062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1004 04:23:54.580736   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.583200   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.583489   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.583522   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.583672   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.583871   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.584066   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.584195   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.584402   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.584567   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.584582   67282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:23:54.835402   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:23:54.835436   67282 machine.go:96] duration metric: took 914.509404ms to provisionDockerMachine
	I1004 04:23:54.835451   67282 start.go:293] postStartSetup for "old-k8s-version-420062" (driver="kvm2")
	I1004 04:23:54.835466   67282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:23:54.835491   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:54.835870   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:23:54.835902   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.838257   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.838645   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.838670   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.838810   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.838972   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.839117   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.839247   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:54.927041   67282 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:23:54.931330   67282 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:23:54.931357   67282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:23:54.931424   67282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:23:54.931538   67282 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:23:54.931658   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:23:54.941402   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:54.967433   67282 start.go:296] duration metric: took 131.968424ms for postStartSetup
	I1004 04:23:54.967495   67282 fix.go:56] duration metric: took 20.29830643s for fixHost
	I1004 04:23:54.967523   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.970138   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.970485   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.970502   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.970802   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.971000   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.971164   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.971330   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.971560   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.971739   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.971751   67282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:23:55.089031   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015835.056238818
	
	I1004 04:23:55.089054   67282 fix.go:216] guest clock: 1728015835.056238818
	I1004 04:23:55.089063   67282 fix.go:229] Guest: 2024-10-04 04:23:55.056238818 +0000 UTC Remote: 2024-10-04 04:23:54.967501465 +0000 UTC m=+186.499621032 (delta=88.737353ms)
	I1004 04:23:55.089086   67282 fix.go:200] guest clock delta is within tolerance: 88.737353ms
	I1004 04:23:55.089093   67282 start.go:83] releasing machines lock for "old-k8s-version-420062", held for 20.419961099s
	I1004 04:23:55.089124   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.089472   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:55.092047   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.092519   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.092552   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.092784   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093364   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093566   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093670   67282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:23:55.093715   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:55.093808   67282 ssh_runner.go:195] Run: cat /version.json
	I1004 04:23:55.093834   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:55.096451   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.096829   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.096862   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.096881   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.097173   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:55.097364   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:55.097446   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.097474   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.097548   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:55.097685   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:55.097816   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:55.097823   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:55.097953   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:55.098106   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:55.207195   67282 ssh_runner.go:195] Run: systemctl --version
	I1004 04:23:55.214080   67282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:23:55.369882   67282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:23:55.376111   67282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:23:55.376171   67282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:23:55.393916   67282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:23:55.393945   67282 start.go:495] detecting cgroup driver to use...
	I1004 04:23:55.394015   67282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:23:55.411330   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:23:55.427665   67282 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:23:55.427734   67282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:23:55.445180   67282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:23:55.465131   67282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:23:55.596260   67282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:23:55.781647   67282 docker.go:233] disabling docker service ...
	I1004 04:23:55.781711   67282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:23:55.801252   67282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:23:55.817688   67282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:23:55.952563   67282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:23:56.081096   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:23:56.096194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:23:56.116859   67282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1004 04:23:56.116924   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.129060   67282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:23:56.129133   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.141246   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.158759   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.172580   67282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:23:56.192027   67282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:23:56.206698   67282 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:23:56.206757   67282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:23:56.223074   67282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:23:56.241061   67282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:56.365616   67282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:23:56.474445   67282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:23:56.474519   67282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:23:56.480077   67282 start.go:563] Will wait 60s for crictl version
	I1004 04:23:56.480133   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:23:56.485207   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:23:56.537710   67282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:23:56.537802   67282 ssh_runner.go:195] Run: crio --version
	I1004 04:23:56.571679   67282 ssh_runner.go:195] Run: crio --version
	I1004 04:23:56.605639   67282 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1004 04:23:55.116525   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Start
	I1004 04:23:55.116723   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring networks are active...
	I1004 04:23:55.117665   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring network default is active
	I1004 04:23:55.118079   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring network mk-default-k8s-diff-port-281471 is active
	I1004 04:23:55.118565   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Getting domain xml...
	I1004 04:23:55.119417   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Creating domain...
	I1004 04:23:56.429715   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting to get IP...
	I1004 04:23:56.430752   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.431261   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.431353   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.431245   68239 retry.go:31] will retry after 200.843618ms: waiting for machine to come up
	I1004 04:23:56.633542   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.633974   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.634003   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.633923   68239 retry.go:31] will retry after 291.906374ms: waiting for machine to come up
	I1004 04:23:56.927325   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.927847   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.927880   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.927813   68239 retry.go:31] will retry after 374.509137ms: waiting for machine to come up
	I1004 04:23:57.304251   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.304713   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.304738   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:57.304671   68239 retry.go:31] will retry after 583.046975ms: waiting for machine to come up
	I1004 04:23:57.889410   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.889837   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.889868   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:57.889795   68239 retry.go:31] will retry after 549.483036ms: waiting for machine to come up
	I1004 04:23:56.606945   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:56.610421   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:56.610952   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:56.610976   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:56.611373   67282 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1004 04:23:56.615872   67282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:23:56.629783   67282 kubeadm.go:883] updating cluster {Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:23:56.629932   67282 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1004 04:23:56.629983   67282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:56.690260   67282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:23:56.690343   67282 ssh_runner.go:195] Run: which lz4
	I1004 04:23:56.695808   67282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:23:56.701593   67282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:23:56.701623   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1004 04:23:54.156612   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"True"
	I1004 04:23:54.156637   66755 pod_ready.go:82] duration metric: took 5.506141622s for pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace to be "Ready" ...
	I1004 04:23:54.156646   66755 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:23:56.164534   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:58.166994   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:58.440643   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:58.441080   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:58.441109   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:58.441034   68239 retry.go:31] will retry after 585.437747ms: waiting for machine to come up
	I1004 04:23:59.027951   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.028414   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.028441   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:59.028369   68239 retry.go:31] will retry after 773.32668ms: waiting for machine to come up
	I1004 04:23:59.803329   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.803752   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.803793   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:59.803722   68239 retry.go:31] will retry after 936.396482ms: waiting for machine to come up
	I1004 04:24:00.741805   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:00.742328   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:00.742372   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:00.742262   68239 retry.go:31] will retry after 1.294836266s: waiting for machine to come up
	I1004 04:24:02.038222   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:02.038750   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:02.038785   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:02.038699   68239 retry.go:31] will retry after 2.282660025s: waiting for machine to come up
	I1004 04:23:58.525796   67282 crio.go:462] duration metric: took 1.830039762s to copy over tarball
	I1004 04:23:58.525868   67282 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:24:01.514552   67282 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.98865618s)
	I1004 04:24:01.514585   67282 crio.go:469] duration metric: took 2.988759159s to extract the tarball
	I1004 04:24:01.514595   67282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:24:01.562130   67282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:01.598856   67282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:24:01.598882   67282 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 04:24:01.598960   67282 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:01.599035   67282 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.599047   67282 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.599048   67282 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1004 04:24:01.599020   67282 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.599025   67282 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.598967   67282 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.598967   67282 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.600760   67282 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.600772   67282 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1004 04:24:01.600767   67282 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:01.600791   67282 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.600802   67282 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.600804   67282 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.600807   67282 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.600840   67282 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.837527   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.877366   67282 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1004 04:24:01.877413   67282 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.877464   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:01.882328   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.914693   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.934055   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.941737   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.943929   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.944540   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.948337   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.970977   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.995537   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1004 04:24:02.127073   67282 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1004 04:24:02.127097   67282 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1004 04:24:02.127121   67282 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.127121   67282 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.127156   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.127159   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128471   67282 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1004 04:24:02.128532   67282 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.128535   67282 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1004 04:24:02.128560   67282 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.128571   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128595   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128598   67282 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1004 04:24:02.128627   67282 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.128669   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128730   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1004 04:24:02.128761   67282 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1004 04:24:02.128783   67282 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1004 04:24:02.128815   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.133675   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.133724   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.141911   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.141950   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.141989   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.142044   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.263733   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.263744   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.263798   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.265990   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.297523   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.297566   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.379282   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.379318   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.379331   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.417271   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.454521   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.454559   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.496644   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1004 04:24:02.533632   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1004 04:24:02.533690   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1004 04:24:02.533750   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1004 04:24:02.568138   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1004 04:24:02.568153   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1004 04:24:02.911933   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:03.055844   67282 cache_images.go:92] duration metric: took 1.456943316s to LoadCachedImages
	W1004 04:24:03.055959   67282 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
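Note: the image-preload pass above ends with a cache miss. None of the v1.20.0 images are present in the CRI-O store, and the on-disk cache tarball for kube-controller-manager is absent, so the loader gives up and the images will be pulled later during kubeadm init. As a rough illustration only (not minikube's code), the same decision can be sketched in Go using the podman command and cache-path layout visible in the log:

    // Sketch of the "needs transfer" decision seen above; illustrative only.
    // The podman command and cache directory come from the log; everything else is assumed.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // imagePresent asks the container runtime (via podman, as in the log)
    // whether the image is already stored locally.
    func imagePresent(image string) bool {
    	out, err := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", image).Output()
    	return err == nil && strings.TrimSpace(string(out)) != ""
    }

    func main() {
    	cacheDir := "/home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64"
    	images := []string{
    		"registry.k8s.io/kube-apiserver:v1.20.0",
    		"registry.k8s.io/etcd:3.4.13-0",
    	}
    	for _, img := range images {
    		if imagePresent(img) {
    			continue // already in the runtime, nothing to transfer
    		}
    		// Cached tarballs are named <name>_<tag> under the cache dir (assumed naming).
    		tarball := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
    		if _, err := os.Stat(tarball); err != nil {
    			// The situation in the log: no local image and no cache file,
    			// so the load is skipped and kubeadm pulls the image instead.
    			fmt.Printf("X Unable to load cached image %s: %v\n", img, err)
    		}
    	}
    }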
	I1004 04:24:03.055976   67282 kubeadm.go:934] updating node { 192.168.50.146 8443 v1.20.0 crio true true} ...
	I1004 04:24:03.056087   67282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-420062 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
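Note: the kubelet drop-in above is rendered from the node's profile (binary directory, hostname override, node IP). A minimal sketch of rendering such an ExecStart line with text/template follows; the struct and field names are illustrative assumptions, not minikube's own types:

    // Sketch: render a kubelet ExecStart line like the one above from a small struct.
    package main

    import (
    	"os"
    	"text/template"
    )

    type kubeletOpts struct {
    	BinDir   string
    	Hostname string
    	NodeIP   string
    }

    const unitTmpl = `ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unitTmpl))
    	_ = t.Execute(os.Stdout, kubeletOpts{
    		BinDir:   "/var/lib/minikube/binaries/v1.20.0",
    		Hostname: "old-k8s-version-420062",
    		NodeIP:   "192.168.50.146",
    	})
    }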
	I1004 04:24:03.056162   67282 ssh_runner.go:195] Run: crio config
	I1004 04:24:03.103752   67282 cni.go:84] Creating CNI manager for ""
	I1004 04:24:03.103792   67282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:03.103805   67282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:03.103826   67282 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.146 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-420062 NodeName:old-k8s-version-420062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1004 04:24:03.103952   67282 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-420062"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:03.104008   67282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1004 04:24:03.114316   67282 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:03.114372   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:03.124059   67282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1004 04:24:03.143310   67282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:03.161143   67282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
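Note: the multi-document kubeadm config shown above is what lands in /var/tmp/minikube/kubeadm.yaml.new (2123 bytes per the log). A quick, purely hypothetical way to sanity-check such a file before shipping it is to decode each YAML document in turn; gopkg.in/yaml.v3 is assumed here only for illustration:

    // Sketch: confirm a multi-document kubeadm config parses as YAML.
    package main

    import (
    	"bytes"
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	data, err := os.ReadFile("kubeadm.yaml") // e.g. the content copied to /var/tmp/minikube/kubeadm.yaml.new
    	if err != nil {
    		panic(err)
    	}
    	dec := yaml.NewDecoder(bytes.NewReader(data))
    	for i := 0; ; i++ {
    		var doc map[string]interface{}
    		err := dec.Decode(&doc)
    		if errors.Is(err, io.EOF) {
    			break // all documents consumed
    		}
    		if err != nil {
    			panic(err)
    		}
    		fmt.Printf("document %d: kind=%v apiVersion=%v\n", i, doc["kind"], doc["apiVersion"])
    	}
    }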
	I1004 04:24:03.178444   67282 ssh_runner.go:195] Run: grep 192.168.50.146	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:03.182235   67282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:03.195103   67282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:03.317820   67282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:03.334820   67282 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062 for IP: 192.168.50.146
	I1004 04:24:03.334840   67282 certs.go:194] generating shared ca certs ...
	I1004 04:24:03.334855   67282 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:03.335008   67282 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:03.335049   67282 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:03.335059   67282 certs.go:256] generating profile certs ...
	I1004 04:24:03.335156   67282 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.key
	I1004 04:24:03.335212   67282 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key.c1f9ed6b
	I1004 04:24:03.335260   67282 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key
	I1004 04:24:03.335368   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:03.335394   67282 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:03.335401   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:03.335426   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:03.335451   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:03.335476   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:03.335518   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:03.336260   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:03.373985   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:03.408150   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:03.444219   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:03.493160   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1004 04:24:00.665171   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:02.815874   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:04.022715   66755 pod_ready.go:93] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.022744   66755 pod_ready.go:82] duration metric: took 9.866089641s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.022756   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.028094   66755 pod_ready.go:93] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.028115   66755 pod_ready.go:82] duration metric: took 5.350911ms for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.028123   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.033106   66755 pod_ready.go:93] pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.033124   66755 pod_ready.go:82] duration metric: took 4.995208ms for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.033132   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9qpgb" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.037388   66755 pod_ready.go:93] pod "kube-proxy-9qpgb" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.037409   66755 pod_ready.go:82] duration metric: took 4.270278ms for pod "kube-proxy-9qpgb" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.037420   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.042717   66755 pod_ready.go:93] pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.042737   66755 pod_ready.go:82] duration metric: took 5.30887ms for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.042747   66755 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" ...
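Note: the pod_ready lines above (process 66755) poll each control-plane pod in kube-system until its Ready condition turns True, with a 4m budget per pod. A minimal client-go sketch of the same kind of wait, with an assumed kubeconfig path and the etcd pod name taken from the log; this is not minikube's own helper:

    // Sketch: wait for a pod's Ready condition with client-go, analogous to pod_ready.go above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-embed-certs-934812", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // not found yet; keep polling
    			}
    			return podReady(pod), nil
    		})
    	fmt.Println("ready wait finished, err =", err)
    }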
	I1004 04:24:04.324259   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:04.324749   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:04.324811   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:04.324726   68239 retry.go:31] will retry after 2.070089599s: waiting for machine to come up
	I1004 04:24:06.396547   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:06.396991   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:06.397015   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:06.396944   68239 retry.go:31] will retry after 3.403718824s: waiting for machine to come up
	I1004 04:24:03.533084   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 04:24:03.565405   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:03.613938   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:24:03.642711   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:03.674784   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:03.706968   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:03.731329   67282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:03.749003   67282 ssh_runner.go:195] Run: openssl version
	I1004 04:24:03.755219   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:03.766499   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.771322   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.771413   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.778185   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:03.790581   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:03.802556   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.807312   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.807373   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.813595   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:03.825043   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:03.835389   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.840004   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.840051   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.847540   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:03.862303   67282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:03.868029   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:03.874811   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:03.880797   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:03.886622   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:03.892273   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:03.898129   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
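Note: the `openssl x509 -checkend 86400` runs above confirm each control-plane certificate remains valid for at least another day. The equivalent check in Go with crypto/x509, using one of the paths from the log:

    // Sketch of the `openssl x509 -checkend 86400` test: parse the PEM cert
    // and require NotAfter to be at least 24h away.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }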
	I1004 04:24:03.905775   67282 kubeadm.go:392] StartCluster: {Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:03.905852   67282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:03.905890   67282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:03.954627   67282 cri.go:89] found id: ""
	I1004 04:24:03.954702   67282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:03.965146   67282 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:03.965170   67282 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:03.965236   67282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:03.975404   67282 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:03.976362   67282 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-420062" does not appear in /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:24:03.976990   67282 kubeconfig.go:62] /home/jenkins/minikube-integration/19546-9647/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-420062" cluster setting kubeconfig missing "old-k8s-version-420062" context setting]
	I1004 04:24:03.977906   67282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:03.979485   67282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:03.989487   67282 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.146
	I1004 04:24:03.989517   67282 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:03.989529   67282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:03.989577   67282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:04.031536   67282 cri.go:89] found id: ""
	I1004 04:24:04.031607   67282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:04.048652   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:04.057813   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:04.057830   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:04.057867   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:24:04.066213   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:04.066252   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:04.074904   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:24:04.083485   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:04.083522   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:04.092314   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:24:04.100528   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:04.100572   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:04.109232   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:24:04.118051   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:04.118091   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
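Note: the grep-and-rm sequence above inspects each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that lacks it (here all four are simply missing). A compact local sketch of that cleanup loop, error handling elided; the real steps run over SSH with sudo:

    // Sketch: keep a kubeconfig only if it references the expected control-plane endpoint.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func main() {
    	endpoint := []byte("https://control-plane.minikube.internal:8443")
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !bytes.Contains(data, endpoint) {
    			// Missing or pointing somewhere else: drop it so kubeadm regenerates it.
    			fmt.Printf("removing %s\n", f)
    			_ = os.Remove(f)
    		}
    	}
    }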
	I1004 04:24:04.127430   67282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:04.137949   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:04.272627   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:04.940435   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.181288   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.268873   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
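Note: the five commands above replay individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied config rather than running a full init. A hedged sketch of driving the same sequence from Go; the `sudo env PATH=...` wrapper from the log is approximated with a plain sudo invocation:

    // Sketch: replay kubeadm init phases in order, stopping at the first failure.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm"
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Println("phase failed:", p, err)
    			return
    		}
    	}
    }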
	I1004 04:24:05.373549   67282 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:05.373653   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:05.874543   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.374154   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.874343   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:07.374744   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:07.874734   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:08.374255   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.050700   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:08.548473   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:09.802504   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:09.802912   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:09.802937   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:09.802870   68239 retry.go:31] will retry after 3.430575602s: waiting for machine to come up
	I1004 04:24:13.236792   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.237230   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Found IP for machine: 192.168.39.201
	I1004 04:24:13.237251   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Reserving static IP address...
	I1004 04:24:13.237268   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has current primary IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.237712   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-281471", mac: "52:54:00:cd:36:92", ip: "192.168.39.201"} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.237745   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Reserved static IP address: 192.168.39.201
	I1004 04:24:13.237765   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | skip adding static IP to network mk-default-k8s-diff-port-281471 - found existing host DHCP lease matching {name: "default-k8s-diff-port-281471", mac: "52:54:00:cd:36:92", ip: "192.168.39.201"}
	I1004 04:24:13.237786   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Getting to WaitForSSH function...
	I1004 04:24:13.237805   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for SSH to be available...
	I1004 04:24:13.240068   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.240354   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.240384   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.240514   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Using SSH client type: external
	I1004 04:24:13.240540   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa (-rw-------)
	I1004 04:24:13.240577   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:24:13.240594   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | About to run SSH command:
	I1004 04:24:13.240608   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | exit 0
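Note: the DBG lines above show the external ssh probe the kvm2 driver keeps issuing until the guest answers: a non-interactive `exit 0` with host-key checking disabled. A small sketch of the same probe, with the key path and address copied from the log:

    // Sketch: probe SSH readiness of the guest by running `exit 0` non-interactively.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "PasswordAuthentication=no",
    		"-o", "IdentitiesOnly=yes",
    		"-i", "/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa",
    		"-p", "22",
    		"docker@192.168.39.201",
    		"exit 0",
    	}
    	err := exec.Command("/usr/bin/ssh", args...).Run()
    	fmt.Println("ssh reachable:", err == nil)
    }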
	I1004 04:24:08.874627   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:09.374627   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:09.874278   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.374675   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.873949   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:11.373966   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:11.873775   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:12.373874   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:12.874010   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:13.374575   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
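Note: the repeated pgrep runs above poll roughly every 500ms for a kube-apiserver process launched from the minikube manifests. A stand-alone sketch of that wait loop; the overall timeout here is an assumption:

    // Sketch: poll `pgrep -xnf kube-apiserver.*minikube.*` until it reports a PID
    // or the deadline passes.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil && strings.TrimSpace(string(out)) != "" {
    			fmt.Println("apiserver process found, pid:", strings.TrimSpace(string(out)))
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver process")
    }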
	I1004 04:24:10.550171   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:13.049596   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:14.741098   66293 start.go:364] duration metric: took 53.770546651s to acquireMachinesLock for "no-preload-658545"
	I1004 04:24:14.741156   66293 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:24:14.741164   66293 fix.go:54] fixHost starting: 
	I1004 04:24:14.741565   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:14.741595   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:14.758364   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37947
	I1004 04:24:14.758823   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:14.759356   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:24:14.759383   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:14.759700   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:14.759895   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:14.760077   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:24:14.761849   66293 fix.go:112] recreateIfNeeded on no-preload-658545: state=Stopped err=<nil>
	I1004 04:24:14.761873   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	W1004 04:24:14.762037   66293 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:24:14.764123   66293 out.go:177] * Restarting existing kvm2 VM for "no-preload-658545" ...
	I1004 04:24:13.371830   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | SSH cmd err, output: <nil>: 
	I1004 04:24:13.372219   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetConfigRaw
	I1004 04:24:13.372817   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:13.375676   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.376080   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.376116   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.376393   67541 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/config.json ...
	I1004 04:24:13.376616   67541 machine.go:93] provisionDockerMachine start ...
	I1004 04:24:13.376638   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:13.376845   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.379413   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.379847   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.379908   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.380015   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.380204   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.380360   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.380493   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.380657   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.380913   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.380988   67541 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:24:13.492488   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:24:13.492528   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.492749   67541 buildroot.go:166] provisioning hostname "default-k8s-diff-port-281471"
	I1004 04:24:13.492768   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.492928   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.495691   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.496003   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.496031   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.496160   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.496368   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.496530   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.496651   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.496785   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.497017   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.497034   67541 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-281471 && echo "default-k8s-diff-port-281471" | sudo tee /etc/hostname
	I1004 04:24:13.627336   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-281471
	
	I1004 04:24:13.627364   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.630757   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.631162   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.631199   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.631486   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.631701   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.631874   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.632018   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.632216   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.632431   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.632457   67541 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-281471' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-281471/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-281471' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:24:13.758386   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:24:13.758413   67541 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:24:13.758462   67541 buildroot.go:174] setting up certificates
	I1004 04:24:13.758472   67541 provision.go:84] configureAuth start
	I1004 04:24:13.758484   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.758740   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:13.761590   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.761899   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.761939   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.762068   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.764293   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.764644   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.764672   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.764811   67541 provision.go:143] copyHostCerts
	I1004 04:24:13.764869   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:24:13.764880   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:24:13.764936   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:24:13.765046   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:24:13.765055   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:24:13.765075   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:24:13.765127   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:24:13.765135   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:24:13.765160   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:24:13.765235   67541 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-281471 san=[127.0.0.1 192.168.39.201 default-k8s-diff-port-281471 localhost minikube]
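Note: the provision step above mints a server certificate for the machine carrying the listed SANs, signed by the minikube CA. The sketch below only shows how such a SAN set is expressed in an x509 template (self-signed for brevity, unlike the CA-signed certificate the provisioner actually writes); error handling elided:

    // Sketch: build an x509 server certificate with the SANs listed above.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-281471"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
    		DNSNames:     []string{"default-k8s-diff-port-281471", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.201")},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }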
	I1004 04:24:14.075640   67541 provision.go:177] copyRemoteCerts
	I1004 04:24:14.075698   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:24:14.075722   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.078293   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.078658   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.078689   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.078827   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.079048   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.079213   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.079348   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.167232   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:24:14.193065   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1004 04:24:14.218112   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:24:14.243281   67541 provision.go:87] duration metric: took 484.783764ms to configureAuth
	I1004 04:24:14.243310   67541 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:24:14.243506   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:14.243593   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.246497   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.246837   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.246885   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.247019   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.247211   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.247384   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.247551   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.247719   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:14.247909   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:14.247923   67541 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:24:14.487651   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:24:14.487675   67541 machine.go:96] duration metric: took 1.11104473s to provisionDockerMachine
	I1004 04:24:14.487686   67541 start.go:293] postStartSetup for "default-k8s-diff-port-281471" (driver="kvm2")
	I1004 04:24:14.487696   67541 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:24:14.487733   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.488084   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:24:14.488114   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.490844   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.491198   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.491229   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.491372   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.491562   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.491700   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.491815   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.579398   67541 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:24:14.584068   67541 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:24:14.584098   67541 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:24:14.584179   67541 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:24:14.584274   67541 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:24:14.584379   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:24:14.594853   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:14.621833   67541 start.go:296] duration metric: took 134.135256ms for postStartSetup
	I1004 04:24:14.621874   67541 fix.go:56] duration metric: took 19.532563115s for fixHost
	I1004 04:24:14.621895   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.625077   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.625418   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.625443   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.625678   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.625900   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.626059   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.626205   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.626373   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:14.626589   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:14.626603   67541 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:24:14.740932   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015854.697826512
	
	I1004 04:24:14.740950   67541 fix.go:216] guest clock: 1728015854.697826512
	I1004 04:24:14.740957   67541 fix.go:229] Guest: 2024-10-04 04:24:14.697826512 +0000 UTC Remote: 2024-10-04 04:24:14.621877739 +0000 UTC m=+171.379203860 (delta=75.948773ms)
	I1004 04:24:14.741000   67541 fix.go:200] guest clock delta is within tolerance: 75.948773ms
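The fix step above runs `date +%s.%N` on the guest and compares the result with the host clock, accepting the host if the delta is within tolerance. A minimal sketch of that comparison, assuming a 1s tolerance purely for illustration (the actual threshold is not shown in the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` run on the guest and
// returns the absolute difference from the supplied host time.
func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	delta := time.Unix(sec, nsec).Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

func main() {
	delta, err := guestClockDelta("1728015854.697826512", time.Now())
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed threshold, for illustration only
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
}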
	I1004 04:24:14.741007   67541 start.go:83] releasing machines lock for "default-k8s-diff-port-281471", held for 19.651737082s
	I1004 04:24:14.741031   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.741291   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:14.744142   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.744498   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.744518   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.744720   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745368   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745559   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745665   67541 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:24:14.745706   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.745802   67541 ssh_runner.go:195] Run: cat /version.json
	I1004 04:24:14.745843   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.748443   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748779   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.748813   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748838   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748927   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.749064   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.749245   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.749267   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.749283   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.749441   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.749481   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.749589   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.749725   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.749856   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.833632   67541 ssh_runner.go:195] Run: systemctl --version
	I1004 04:24:14.863812   67541 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:24:15.016823   67541 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:24:15.023613   67541 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:24:15.023696   67541 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:24:15.042546   67541 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:24:15.042576   67541 start.go:495] detecting cgroup driver to use...
	I1004 04:24:15.042645   67541 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:24:15.060267   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:24:15.076088   67541 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:24:15.076155   67541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:24:15.091741   67541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:24:15.107153   67541 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:24:15.230591   67541 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:24:15.381704   67541 docker.go:233] disabling docker service ...
	I1004 04:24:15.381776   67541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:24:15.397616   67541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:24:15.412350   67541 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:24:15.569525   67541 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:24:15.690120   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:24:15.705348   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:24:15.728253   67541 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:24:15.728334   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.739875   67541 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:24:15.739951   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.751997   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.765898   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.777917   67541 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:24:15.791235   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.802390   67541 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.825385   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.837278   67541 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:24:15.848791   67541 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:24:15.848864   67541 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:24:15.870774   67541 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
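The lines above show the netfilter prerequisite handling: the bridge-nf-call-iptables sysctl cannot be read, so br_netfilter is loaded and IPv4 forwarding is enabled. A rough sketch of the same checks, assuming direct access to /proc on the local machine (not minikube's actual code path, which runs these over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(sysctlPath); err != nil {
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v: %s\n", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` (needs root).
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}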
	I1004 04:24:15.883544   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:15.997406   67541 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:24:16.095391   67541 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:24:16.095508   67541 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:24:16.102427   67541 start.go:563] Will wait 60s for crictl version
	I1004 04:24:16.102510   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:24:16.106958   67541 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:24:16.150721   67541 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:24:16.150824   67541 ssh_runner.go:195] Run: crio --version
	I1004 04:24:16.181714   67541 ssh_runner.go:195] Run: crio --version
	I1004 04:24:16.214202   67541 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:24:16.215583   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:16.218418   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:16.218800   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:16.218831   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:16.219002   67541 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 04:24:16.223382   67541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:16.236443   67541 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:24:16.236565   67541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:24:16.236652   67541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:16.279095   67541 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:24:16.279158   67541 ssh_runner.go:195] Run: which lz4
	I1004 04:24:16.283684   67541 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:24:16.288436   67541 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:24:16.288472   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 04:24:17.853549   67541 crio.go:462] duration metric: took 1.569889689s to copy over tarball
	I1004 04:24:17.853631   67541 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:24:14.765651   66293 main.go:141] libmachine: (no-preload-658545) Calling .Start
	I1004 04:24:14.765886   66293 main.go:141] libmachine: (no-preload-658545) Ensuring networks are active...
	I1004 04:24:14.766761   66293 main.go:141] libmachine: (no-preload-658545) Ensuring network default is active
	I1004 04:24:14.767179   66293 main.go:141] libmachine: (no-preload-658545) Ensuring network mk-no-preload-658545 is active
	I1004 04:24:14.767706   66293 main.go:141] libmachine: (no-preload-658545) Getting domain xml...
	I1004 04:24:14.768478   66293 main.go:141] libmachine: (no-preload-658545) Creating domain...
	I1004 04:24:16.087556   66293 main.go:141] libmachine: (no-preload-658545) Waiting to get IP...
	I1004 04:24:16.088628   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.089032   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.089093   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.089008   68422 retry.go:31] will retry after 276.442313ms: waiting for machine to come up
	I1004 04:24:16.367448   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.367923   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.367953   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.367894   68422 retry.go:31] will retry after 291.504157ms: waiting for machine to come up
	I1004 04:24:16.661396   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.661958   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.662009   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.661932   68422 retry.go:31] will retry after 378.34293ms: waiting for machine to come up
	I1004 04:24:17.041431   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:17.041942   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:17.041970   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:17.041916   68422 retry.go:31] will retry after 553.613866ms: waiting for machine to come up
	I1004 04:24:17.596745   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:17.597294   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:17.597327   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:17.597259   68422 retry.go:31] will retry after 611.098402ms: waiting for machine to come up
	I1004 04:24:18.210083   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:18.210569   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:18.210592   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:18.210530   68422 retry.go:31] will retry after 691.8822ms: waiting for machine to come up
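The DBG lines above poll the hypervisor for the machine's IP, waiting a growing, jittered interval between attempts ("will retry after ..."). A minimal sketch of such a retry loop; lookupIP, the demo address, and the backoff constants are hypothetical stand-ins, not minikube's retry helper:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP yet")

// lookupIP is a placeholder for querying the hypervisor for the domain's DHCP lease.
func lookupIP(attempt int) (string, error) {
	if attempt < 4 {
		return "", errNoIP
	}
	return "192.168.61.123", nil // made-up address for the demo
}

func main() {
	wait := 250 * time.Millisecond
	for attempt := 0; attempt < 10; attempt++ {
		if ip, err := lookupIP(attempt); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the wait and add jitter so concurrent starts don't poll in lockstep.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2
	}
	fmt.Println("gave up waiting for an IP")
}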
	I1004 04:24:13.873857   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:14.374241   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:14.873863   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.374063   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.873950   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:16.373819   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:16.874290   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:17.374357   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:17.874163   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:18.374160   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.049926   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:17.051060   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:20.132987   67541 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.279324141s)
	I1004 04:24:20.133023   67541 crio.go:469] duration metric: took 2.279442603s to extract the tarball
	I1004 04:24:20.133033   67541 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:24:20.171805   67541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:20.217431   67541 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:24:20.217458   67541 cache_images.go:84] Images are preloaded, skipping loading
	I1004 04:24:20.217468   67541 kubeadm.go:934] updating node { 192.168.39.201 8444 v1.31.1 crio true true} ...
	I1004 04:24:20.217586   67541 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-281471 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:20.217687   67541 ssh_runner.go:195] Run: crio config
	I1004 04:24:20.269529   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:24:20.269559   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:20.269569   67541 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:20.269604   67541 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.201 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-281471 NodeName:default-k8s-diff-port-281471 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:24:20.269822   67541 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.201
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-281471"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:20.269913   67541 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:24:20.281286   67541 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:20.281368   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:20.292186   67541 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1004 04:24:20.310972   67541 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:20.329420   67541 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1004 04:24:20.348358   67541 ssh_runner.go:195] Run: grep 192.168.39.201	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:20.352641   67541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:20.366317   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:20.499648   67541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:20.518930   67541 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471 for IP: 192.168.39.201
	I1004 04:24:20.518954   67541 certs.go:194] generating shared ca certs ...
	I1004 04:24:20.518971   67541 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:20.519121   67541 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:20.519167   67541 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:20.519177   67541 certs.go:256] generating profile certs ...
	I1004 04:24:20.519279   67541 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/client.key
	I1004 04:24:20.519347   67541 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.key.6cd63ef9
	I1004 04:24:20.519381   67541 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.key
	I1004 04:24:20.519492   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:20.519527   67541 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:20.519539   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:20.519570   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:20.519614   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:20.519643   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:20.519710   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:20.520418   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:20.566110   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:20.613646   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:20.648416   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:20.678840   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1004 04:24:20.722021   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 04:24:20.749381   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:20.776777   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 04:24:20.803998   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:20.833182   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:20.859600   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:20.887732   67541 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:20.910566   67541 ssh_runner.go:195] Run: openssl version
	I1004 04:24:20.917151   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:20.930475   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.935819   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.935895   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.942607   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:20.954950   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:20.967348   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.972468   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.972543   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.979061   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:20.992010   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:21.008370   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.015101   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.015161   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.023491   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:21.035766   67541 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:21.041416   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:21.048405   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:21.055468   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:21.062228   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:21.068967   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:21.075984   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
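Each `openssl x509 -noout -in <cert> -checkend 86400` call above verifies that the certificate will still be valid 24 hours from now. An equivalent check written in Go, as a sketch (the path used in main is just the first certificate from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid d from now.
func validFor(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("valid for at least 24h:", ok)
}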
	I1004 04:24:21.086088   67541 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:21.086196   67541 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:21.086253   67541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:21.131997   67541 cri.go:89] found id: ""
	I1004 04:24:21.132061   67541 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:21.145219   67541 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:21.145237   67541 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:21.145289   67541 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:21.157041   67541 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:21.158724   67541 kubeconfig.go:125] found "default-k8s-diff-port-281471" server: "https://192.168.39.201:8444"
	I1004 04:24:21.162295   67541 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:21.173771   67541 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.201
	I1004 04:24:21.173806   67541 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:21.173820   67541 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:21.173891   67541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:21.215149   67541 cri.go:89] found id: ""
	I1004 04:24:21.215216   67541 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:21.234432   67541 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:21.245688   67541 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:21.245707   67541 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:21.245758   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1004 04:24:21.256101   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:21.256168   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:21.267319   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1004 04:24:21.279995   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:21.280050   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:21.292588   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1004 04:24:21.304478   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:21.304545   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:21.317012   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1004 04:24:21.328769   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:21.328853   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:24:21.341597   67541 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:21.353901   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:21.483705   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.340208   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.582628   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.662202   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
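The restart path above re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) with the bundled v1.31.1 binaries prepended to PATH. A rough sketch of driving those phases; the helper below is illustrative only and runs them on the local machine rather than over SSH as minikube does:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runPhase invokes `sudo env PATH=<bundled>:$PATH kubeadm init phase <args...>`.
func runPhase(args ...string) error {
	cmd := exec.Command("sudo", append([]string{"env",
		"PATH=/var/lib/minikube/binaries/v1.31.1:" + os.Getenv("PATH"),
		"kubeadm", "init", "phase"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	phases := [][]string{
		{"certs", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		{"kubeconfig", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		{"kubelet-start", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		{"control-plane", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		{"etcd", "local", "--config", "/var/tmp/minikube/kubeadm.yaml"},
	}
	for _, p := range phases {
		if err := runPhase(p...); err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}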
	I1004 04:24:22.773206   67541 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:22.773327   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.274151   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:18.903981   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:18.904373   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:18.904398   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:18.904331   68422 retry.go:31] will retry after 1.022635653s: waiting for machine to come up
	I1004 04:24:19.929163   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:19.929707   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:19.929749   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:19.929656   68422 retry.go:31] will retry after 939.130061ms: waiting for machine to come up
	I1004 04:24:20.870067   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:20.870578   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:20.870606   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:20.870521   68422 retry.go:31] will retry after 1.673919202s: waiting for machine to come up
	I1004 04:24:22.546229   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:22.546621   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:22.546650   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:22.546569   68422 retry.go:31] will retry after 1.962556159s: waiting for machine to come up
	I1004 04:24:18.874214   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.374670   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.874355   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:20.374335   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:20.874299   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:21.374492   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:21.874293   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:22.373890   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:22.874622   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.374639   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.552128   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:22.050844   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:24.051071   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:23.774477   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.807536   67541 api_server.go:72] duration metric: took 1.034328656s to wait for apiserver process to appear ...
	I1004 04:24:23.807569   67541 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:24:23.807593   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.646266   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:26.646299   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:26.646319   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.696828   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:26.696856   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:26.808107   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.819887   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:26.819947   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:27.308535   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:27.317320   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:27.317372   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:27.807868   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:27.817762   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:27.817805   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:28.307660   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:28.313515   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 200:
	ok
	I1004 04:24:28.320539   67541 api_server.go:141] control plane version: v1.31.1
	I1004 04:24:28.320568   67541 api_server.go:131] duration metric: took 4.512991081s to wait for apiserver health ...
	I1004 04:24:28.320578   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:24:28.320586   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:28.322138   67541 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:24:24.511356   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:24.511886   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:24.511917   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:24.511843   68422 retry.go:31] will retry after 2.5950382s: waiting for machine to come up
	I1004 04:24:27.109018   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:27.109474   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:27.109503   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:27.109451   68422 retry.go:31] will retry after 2.984182925s: waiting for machine to come up
	I1004 04:24:23.873822   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:24.373911   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:24.874756   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:25.374035   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:25.873874   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.374503   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.874371   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:27.374335   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:27.873941   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:28.373861   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.550974   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:28.552007   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:28.323513   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:24:28.336556   67541 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:24:28.358371   67541 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:24:28.373163   67541 system_pods.go:59] 8 kube-system pods found
	I1004 04:24:28.373204   67541 system_pods.go:61] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:24:28.373217   67541 system_pods.go:61] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:24:28.373228   67541 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:24:28.373239   67541 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:24:28.373246   67541 system_pods.go:61] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:24:28.373256   67541 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:24:28.373267   67541 system_pods.go:61] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:24:28.373273   67541 system_pods.go:61] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:24:28.373283   67541 system_pods.go:74] duration metric: took 14.891267ms to wait for pod list to return data ...
	I1004 04:24:28.373294   67541 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:24:28.378226   67541 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:24:28.378269   67541 node_conditions.go:123] node cpu capacity is 2
	I1004 04:24:28.378285   67541 node_conditions.go:105] duration metric: took 4.985167ms to run NodePressure ...
	I1004 04:24:28.378309   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:28.649369   67541 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:24:28.654563   67541 kubeadm.go:739] kubelet initialised
	I1004 04:24:28.654584   67541 kubeadm.go:740] duration metric: took 5.188927ms waiting for restarted kubelet to initialise ...
	I1004 04:24:28.654591   67541 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:24:28.662152   67541 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.668248   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.668278   67541 pod_ready.go:82] duration metric: took 6.099746ms for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.668287   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.668294   67541 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.675790   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.675811   67541 pod_ready.go:82] duration metric: took 7.509617ms for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.675823   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.675830   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.683763   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.683811   67541 pod_ready.go:82] duration metric: took 7.972006ms for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.683830   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.683839   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.761974   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.762006   67541 pod_ready.go:82] duration metric: took 78.154275ms for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.762021   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.762030   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.162590   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-proxy-4nnld" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.162623   67541 pod_ready.go:82] duration metric: took 400.583388ms for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.162634   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-proxy-4nnld" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.162643   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.562557   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.562584   67541 pod_ready.go:82] duration metric: took 399.929497ms for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.562595   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.562602   67541 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.963502   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.963528   67541 pod_ready.go:82] duration metric: took 400.919452ms for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.963539   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.963547   67541 pod_ready.go:39] duration metric: took 1.308947485s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:24:29.963561   67541 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:24:29.976241   67541 ops.go:34] apiserver oom_adj: -16
	I1004 04:24:29.976268   67541 kubeadm.go:597] duration metric: took 8.831025549s to restartPrimaryControlPlane
	I1004 04:24:29.976278   67541 kubeadm.go:394] duration metric: took 8.890203906s to StartCluster
	I1004 04:24:29.976295   67541 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:29.976372   67541 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:24:29.977898   67541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:29.978168   67541 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:24:29.978222   67541 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:24:29.978306   67541 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978330   67541 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-281471"
	W1004 04:24:29.978341   67541 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:24:29.978329   67541 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978353   67541 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978369   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:29.978367   67541 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-281471"
	I1004 04:24:29.978377   67541 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-281471"
	W1004 04:24:29.978387   67541 addons.go:243] addon metrics-server should already be in state true
	I1004 04:24:29.978413   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:29.978464   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:29.978731   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978783   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.978818   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978871   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.978839   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978970   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.979903   67541 out.go:177] * Verifying Kubernetes components...
	I1004 04:24:29.981432   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:29.994332   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42631
	I1004 04:24:29.994917   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:29.995488   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:29.995503   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:29.995865   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:29.996675   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:29.999180   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
	I1004 04:24:29.999220   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I1004 04:24:29.999564   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:29.999651   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.000157   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.000182   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.000262   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.000281   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.000379   67541 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-281471"
	W1004 04:24:30.000398   67541 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:24:30.000429   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:30.000613   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.000646   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.000790   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.000812   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.001163   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.001215   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.001259   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.001307   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.016576   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34675
	I1004 04:24:30.016650   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41997
	I1004 04:24:30.016796   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44507
	I1004 04:24:30.016993   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017079   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017138   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017536   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017557   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017548   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017584   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017537   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017621   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017929   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.017931   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.017970   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.018100   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.018152   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.018559   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.018600   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.020021   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.020637   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.022016   67541 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:30.022018   67541 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:24:30.023395   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:24:30.023417   67541 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:24:30.023444   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.023489   67541 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:24:30.023506   67541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:24:30.023528   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.027678   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028005   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028129   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.028180   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028538   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.028552   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.028560   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028724   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.028750   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.028881   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.028911   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.029013   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.029055   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.029124   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.037309   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46395
	I1004 04:24:30.037846   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.038328   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.038355   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.038683   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.038850   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.040366   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.040572   67541 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:24:30.040586   67541 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:24:30.040602   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.043618   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.044070   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.044092   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.044232   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.044413   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.044541   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.044687   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.194435   67541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:30.223577   67541 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-281471" to be "Ready" ...
	I1004 04:24:30.277458   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:24:30.316201   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:24:30.316227   67541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:24:30.333635   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:24:30.346511   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:24:30.346549   67541 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:24:30.405197   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:24:30.405219   67541 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:24:30.465174   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:24:31.307064   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307137   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307430   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.307442   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.307469   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.307546   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307574   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307691   67541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030198983s)
	I1004 04:24:31.307733   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307747   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307789   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.307811   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.309264   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.309275   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.309281   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.309291   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.309299   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.309538   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.309568   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.309583   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.315635   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.315653   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.315917   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.315933   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.411630   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.411658   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.411934   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.411951   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.411965   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.411983   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.411997   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.412221   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.412261   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.412274   67541 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-281471"
	I1004 04:24:31.412283   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.414267   67541 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1004 04:24:31.415607   67541 addons.go:510] duration metric: took 1.43738386s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1004 04:24:32.227563   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:30.095611   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:30.096032   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:30.096061   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:30.095981   68422 retry.go:31] will retry after 2.833386023s: waiting for machine to come up
	I1004 04:24:32.933027   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.933509   66293 main.go:141] libmachine: (no-preload-658545) Found IP for machine: 192.168.72.54
	I1004 04:24:32.933538   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has current primary IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.933544   66293 main.go:141] libmachine: (no-preload-658545) Reserving static IP address...
	I1004 04:24:32.933950   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "no-preload-658545", mac: "52:54:00:f5:6c:11", ip: "192.168.72.54"} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:32.933970   66293 main.go:141] libmachine: (no-preload-658545) Reserved static IP address: 192.168.72.54
	I1004 04:24:32.933988   66293 main.go:141] libmachine: (no-preload-658545) DBG | skip adding static IP to network mk-no-preload-658545 - found existing host DHCP lease matching {name: "no-preload-658545", mac: "52:54:00:f5:6c:11", ip: "192.168.72.54"}
	I1004 04:24:32.934002   66293 main.go:141] libmachine: (no-preload-658545) DBG | Getting to WaitForSSH function...
	I1004 04:24:32.934016   66293 main.go:141] libmachine: (no-preload-658545) Waiting for SSH to be available...
	I1004 04:24:32.936089   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.936440   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:32.936471   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.936572   66293 main.go:141] libmachine: (no-preload-658545) DBG | Using SSH client type: external
	I1004 04:24:32.936599   66293 main.go:141] libmachine: (no-preload-658545) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa (-rw-------)
	I1004 04:24:32.936637   66293 main.go:141] libmachine: (no-preload-658545) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:24:32.936650   66293 main.go:141] libmachine: (no-preload-658545) DBG | About to run SSH command:
	I1004 04:24:32.936661   66293 main.go:141] libmachine: (no-preload-658545) DBG | exit 0
	I1004 04:24:33.064432   66293 main.go:141] libmachine: (no-preload-658545) DBG | SSH cmd err, output: <nil>: 
	I1004 04:24:33.064791   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetConfigRaw
	I1004 04:24:33.065494   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:33.068038   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.068302   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.068325   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.068580   66293 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/config.json ...
	I1004 04:24:33.068837   66293 machine.go:93] provisionDockerMachine start ...
	I1004 04:24:33.068858   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:33.069072   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.071425   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.071748   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.071819   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.071946   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.072166   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.072332   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.072429   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.072587   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.072799   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.072814   66293 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:24:33.184623   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:24:33.184656   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.184912   66293 buildroot.go:166] provisioning hostname "no-preload-658545"
	I1004 04:24:33.184946   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.185126   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.188804   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.189189   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.189222   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.189419   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.189664   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.189839   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.190002   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.190128   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.190300   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.190313   66293 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-658545 && echo "no-preload-658545" | sudo tee /etc/hostname
	I1004 04:24:33.316349   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-658545
	
	I1004 04:24:33.316381   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.319460   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.319908   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.319945   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.320110   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.320301   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.320475   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.320628   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.320811   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.321031   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.321058   66293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-658545' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-658545/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-658545' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:24:28.874265   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:29.374364   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:29.874581   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:30.373909   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:30.874089   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.374708   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.874696   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:32.374061   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:32.874233   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:33.374290   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.050105   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:33.549870   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:33.444185   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:24:33.444221   66293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:24:33.444246   66293 buildroot.go:174] setting up certificates
	I1004 04:24:33.444257   66293 provision.go:84] configureAuth start
	I1004 04:24:33.444273   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.444569   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:33.447726   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.448137   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.448168   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.448332   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.450903   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.451311   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.451340   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.451479   66293 provision.go:143] copyHostCerts
	I1004 04:24:33.451559   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:24:33.451571   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:24:33.451638   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:24:33.451748   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:24:33.451763   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:24:33.451818   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:24:33.451897   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:24:33.451906   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:24:33.451931   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:24:33.451992   66293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.no-preload-658545 san=[127.0.0.1 192.168.72.54 localhost minikube no-preload-658545]
	I1004 04:24:33.577106   66293 provision.go:177] copyRemoteCerts
	I1004 04:24:33.577160   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:24:33.577183   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.579990   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.580330   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.580359   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.580496   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.580672   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.580810   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.580937   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:33.671123   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:24:33.697805   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1004 04:24:33.725408   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:24:33.751285   66293 provision.go:87] duration metric: took 307.010531ms to configureAuth
	I1004 04:24:33.751315   66293 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:24:33.751553   66293 config.go:182] Loaded profile config "no-preload-658545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:33.751651   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.754476   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.754896   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.754938   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.755087   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.755282   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.755450   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.755592   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.755723   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.755969   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.755987   66293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:24:33.996596   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:24:33.996625   66293 machine.go:96] duration metric: took 927.772762ms to provisionDockerMachine
	I1004 04:24:33.996636   66293 start.go:293] postStartSetup for "no-preload-658545" (driver="kvm2")
	I1004 04:24:33.996645   66293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:24:33.996662   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:33.996958   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:24:33.996981   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.999632   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.000082   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.000111   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.000324   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.000537   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.000733   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.000924   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.089338   66293 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:24:34.094278   66293 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:24:34.094303   66293 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:24:34.094377   66293 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:24:34.094468   66293 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:24:34.094597   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:24:34.105335   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:34.134191   66293 start.go:296] duration metric: took 137.541908ms for postStartSetup
	I1004 04:24:34.134243   66293 fix.go:56] duration metric: took 19.393079344s for fixHost
	I1004 04:24:34.134269   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.137227   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.137599   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.137638   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.137779   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.137978   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.138156   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.138289   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.138459   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:34.138652   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:34.138663   66293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:24:34.250671   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015874.218795126
	
	I1004 04:24:34.250699   66293 fix.go:216] guest clock: 1728015874.218795126
	I1004 04:24:34.250709   66293 fix.go:229] Guest: 2024-10-04 04:24:34.218795126 +0000 UTC Remote: 2024-10-04 04:24:34.134249208 +0000 UTC m=+355.755571497 (delta=84.545918ms)
	I1004 04:24:34.250735   66293 fix.go:200] guest clock delta is within tolerance: 84.545918ms
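	The two fix.go lines above are the guest clock check: the provisioner runs date +%s.%N on the guest, compares the result with the host's wall clock, and only resynchronizes if the delta exceeds a tolerance (the log shows an ~84ms delta being accepted; the actual threshold is not printed). A small illustrative sketch of that comparison, with a 2-second tolerance assumed purely for the example:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK returns the absolute guest/host clock delta and whether it is
// within the given tolerance. The tolerance value is an assumption for this
// example; the log only shows that ~84ms was considered acceptable.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(84 * time.Millisecond) // roughly the delta reported above
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}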
	I1004 04:24:34.250742   66293 start.go:83] releasing machines lock for "no-preload-658545", held for 19.509615446s
	I1004 04:24:34.250763   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.250965   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:34.254332   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.254720   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.254746   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.254982   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255550   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255745   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255843   66293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:24:34.255907   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.255973   66293 ssh_runner.go:195] Run: cat /version.json
	I1004 04:24:34.255996   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.258802   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259036   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259118   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.259143   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259309   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.259487   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.259538   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.259563   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259633   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.259752   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.259845   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.259891   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.260042   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.260180   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.362345   66293 ssh_runner.go:195] Run: systemctl --version
	I1004 04:24:34.368641   66293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:24:34.527679   66293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:24:34.534212   66293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:24:34.534291   66293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:24:34.553539   66293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:24:34.553570   66293 start.go:495] detecting cgroup driver to use...
	I1004 04:24:34.553638   66293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:24:34.573489   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:24:34.588220   66293 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:24:34.588281   66293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:24:34.606014   66293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:24:34.621246   66293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:24:34.749423   66293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:24:34.915880   66293 docker.go:233] disabling docker service ...
	I1004 04:24:34.915960   66293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:24:34.936625   66293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:24:34.951534   66293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:24:35.089398   66293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:24:35.225269   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:24:35.241006   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:24:35.261586   66293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:24:35.261651   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.273501   66293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:24:35.273571   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.285392   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.296475   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.307774   66293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:24:35.319241   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.330361   66293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.349013   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
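	Taken together, the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, put conmon into the pod cgroup, and open unprivileged ports via default_sysctls. After this sequence the drop-in should contain lines equivalent to the following (everything else in the real file is left untouched and omitted here):

pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]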
	I1004 04:24:35.360603   66293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:24:35.371516   66293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:24:35.371581   66293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:24:35.387209   66293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:24:35.398144   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:35.528196   66293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:24:35.629120   66293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:24:35.629198   66293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:24:35.634243   66293 start.go:563] Will wait 60s for crictl version
	I1004 04:24:35.634307   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:35.638372   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:24:35.678659   66293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:24:35.678763   66293 ssh_runner.go:195] Run: crio --version
	I1004 04:24:35.715285   66293 ssh_runner.go:195] Run: crio --version
	I1004 04:24:35.751571   66293 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:24:34.228500   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:36.727080   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:37.228706   67541 node_ready.go:49] node "default-k8s-diff-port-281471" has status "Ready":"True"
	I1004 04:24:37.228745   67541 node_ready.go:38] duration metric: took 7.005123712s for node "default-k8s-diff-port-281471" to be "Ready" ...
	I1004 04:24:37.228760   67541 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:24:37.235256   67541 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:35.752737   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:35.755375   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:35.755763   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:35.755818   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:35.756063   66293 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1004 04:24:35.760601   66293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:35.773870   66293 kubeadm.go:883] updating cluster {Name:no-preload-658545 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:24:35.773970   66293 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:24:35.774001   66293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:35.813619   66293 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:24:35.813650   66293 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 04:24:35.813736   66293 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:35.813756   66293 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.813785   66293 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1004 04:24:35.813796   66293 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.813877   66293 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:35.813740   66293 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.813758   66293 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.813771   66293 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.815271   66293 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:35.815277   66293 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1004 04:24:35.815292   66293 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.815271   66293 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.815276   66293 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:35.815353   66293 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.815358   66293 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.815402   66293 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.956470   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.963066   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.965110   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.970080   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.972477   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.988253   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.013802   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1004 04:24:36.063322   66293 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1004 04:24:36.063364   66293 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.063405   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214786   66293 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1004 04:24:36.214827   66293 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.214867   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214928   66293 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1004 04:24:36.214961   66293 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1004 04:24:36.214995   66293 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.215023   66293 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1004 04:24:36.215043   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214965   66293 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.215081   66293 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1004 04:24:36.215047   66293 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.215100   66293 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.215110   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.215139   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.215147   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.274103   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.274103   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.274185   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.274292   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.274329   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.274343   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.392523   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.405236   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.405257   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.408799   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.408857   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.408860   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.511001   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.568598   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.568658   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.568720   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.568929   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.569021   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.599594   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1004 04:24:36.599733   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.696242   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1004 04:24:36.696294   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1004 04:24:36.696336   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1004 04:24:36.696363   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:36.696390   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:36.696399   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:36.696401   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1004 04:24:36.696449   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1004 04:24:36.696507   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:36.696521   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:36.696508   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1004 04:24:36.696563   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.696613   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.701522   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1004 04:24:37.132809   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:33.874344   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:34.374158   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:34.873848   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:35.373944   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:35.874697   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.373831   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.874231   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:37.374723   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:37.873861   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:38.374206   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.050420   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:38.051653   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:39.242026   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:41.244977   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:39.289977   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.593422519s)
	I1004 04:24:39.290020   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1004 04:24:39.290087   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.593446646s)
	I1004 04:24:39.290114   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1004 04:24:39.290136   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:39.290158   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.593739386s)
	I1004 04:24:39.290175   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1004 04:24:39.290097   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.593563637s)
	I1004 04:24:39.290203   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.593795645s)
	I1004 04:24:39.290208   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:39.290213   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1004 04:24:39.290213   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1004 04:24:39.290265   66293 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.157417466s)
	I1004 04:24:39.290314   66293 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1004 04:24:39.290348   66293 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:39.290392   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:40.750955   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.460708297s)
	I1004 04:24:40.751065   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1004 04:24:40.751102   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:40.750969   66293 ssh_runner.go:235] Completed: which crictl: (1.460561899s)
	I1004 04:24:40.751159   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:40.751190   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:43.031349   66293 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.280136047s)
	I1004 04:24:43.031395   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.280209115s)
	I1004 04:24:43.031566   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1004 04:24:43.031493   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:43.031600   66293 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:43.031641   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:43.084191   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:38.873705   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:39.374361   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:39.874144   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.373793   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.873796   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:41.374744   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:41.874442   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:42.374561   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:42.874638   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:43.374677   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.548818   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:42.550744   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:43.742554   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:44.244427   67541 pod_ready.go:93] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.244453   67541 pod_ready.go:82] duration metric: took 7.009169057s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.244463   67541 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.250595   67541 pod_ready.go:93] pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.250617   67541 pod_ready.go:82] duration metric: took 6.147481ms for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.250625   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.256537   67541 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.256570   67541 pod_ready.go:82] duration metric: took 5.936641ms for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.256583   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.262681   67541 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.262707   67541 pod_ready.go:82] duration metric: took 6.115804ms for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.262721   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.271089   67541 pod_ready.go:93] pod "kube-proxy-4nnld" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.271124   67541 pod_ready.go:82] duration metric: took 8.394207ms for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.271138   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.640124   67541 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.640158   67541 pod_ready.go:82] duration metric: took 369.009816ms for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.640172   67541 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:46.647420   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:45.132971   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.101305613s)
	I1004 04:24:45.133043   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1004 04:24:45.133071   66293 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.048844025s)
	I1004 04:24:45.133079   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:45.133110   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1004 04:24:45.133135   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:45.133179   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:47.228047   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.094844592s)
	I1004 04:24:47.228087   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1004 04:24:47.228089   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.0949275s)
	I1004 04:24:47.228119   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1004 04:24:47.228154   66293 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:47.228214   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:43.874583   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:44.374117   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:44.874398   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.374755   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.874039   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:46.374598   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:46.874446   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:47.374384   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:47.874596   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:48.374021   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.049760   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:47.551861   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:48.647700   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:50.648288   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:52.649288   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:50.627043   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.398805191s)
	I1004 04:24:50.627085   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1004 04:24:50.627122   66293 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:50.627191   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:51.282056   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1004 04:24:51.282099   66293 cache_images.go:123] Successfully loaded all cached images
	I1004 04:24:51.282104   66293 cache_images.go:92] duration metric: took 15.468441268s to LoadCachedImages
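	The stretch of log from "couldn't find preloaded image" to this point is minikube's fallback when no preload tarball matches the requested Kubernetes version: each required image is inspected in the container runtime, removed with crictl rmi if the cached digest differs, and then loaded one at a time from /var/lib/minikube/images with sudo podman load -i. A rough Go sketch of that per-image loop, using a hypothetical runSSH helper in place of minikube's ssh_runner and omitting the scp-if-missing step visible in the log:

package main

import "fmt"

// runSSH stands in for minikube's ssh_runner; here it only prints the command
// it would execute on the guest.
func runSSH(cmd string) error {
	fmt.Println("ssh:", cmd)
	return nil
}

// loadCachedImage mirrors the per-image steps visible in the log: drop any
// stale copy from the runtime, check the cached tarball on the guest, then
// load it with podman.
func loadCachedImage(image, tarball string) error {
	if err := runSSH("sudo /usr/bin/crictl rmi " + image); err != nil {
		return err
	}
	if err := runSSH(`stat -c "%s %y" ` + tarball); err != nil {
		return err
	}
	return runSSH("sudo podman load -i " + tarball)
}

func main() {
	images := map[string]string{
		"registry.k8s.io/kube-apiserver:v1.31.1": "/var/lib/minikube/images/kube-apiserver_v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0":          "/var/lib/minikube/images/etcd_3.5.15-0",
	}
	for img, tar := range images {
		if err := loadCachedImage(img, tar); err != nil {
			fmt.Println("load failed:", err)
		}
	}
}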
	I1004 04:24:51.282116   66293 kubeadm.go:934] updating node { 192.168.72.54 8443 v1.31.1 crio true true} ...
	I1004 04:24:51.282243   66293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-658545 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:51.282321   66293 ssh_runner.go:195] Run: crio config
	I1004 04:24:51.333133   66293 cni.go:84] Creating CNI manager for ""
	I1004 04:24:51.333162   66293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:51.333173   66293 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:51.333201   66293 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-658545 NodeName:no-preload-658545 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:24:51.333361   66293 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-658545"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
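	The block above is the kubeadm, kubelet and kube-proxy configuration minikube renders for this profile; it is staged on the node as /var/tmp/minikube/kubeadm.yaml.new and only promoted to kubeadm.yaml when it differs from the running copy (the diff and cp steps appear further down in this log). A minimal sketch of how the staged file and the effective kubelet config can be inspected, assuming the profile name no-preload-658545 from this log:

	# Compare the freshly rendered config against the copy the control plane was started from.
	minikube -p no-preload-658545 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	# Show the kubelet settings derived from the KubeletConfiguration section (cgroupfs, CRI-O socket, eviction disabled).
	minikube -p no-preload-658545 ssh -- sudo cat /var/lib/kubelet/config.yaml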
	I1004 04:24:51.333419   66293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:24:51.344694   66293 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:51.344757   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:51.354990   66293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1004 04:24:51.372572   66293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:51.394129   66293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1004 04:24:51.412865   66293 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:51.416985   66293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
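	The one-liner above pins control-plane.minikube.internal to the node IP in /etc/hosts: grep -v strips any stale entry, echo appends the current mapping, and the result is staged in a temp file and copied back with sudo because a plain shell redirect would not run with root privileges. A commented restatement of the same idiom, using the IP and hostname from this log:

	# Rebuild /etc/hosts with a single pinned control-plane entry.
	{
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts    # drop any existing mapping
	  echo "192.168.72.54	control-plane.minikube.internal"        # append the pinned IP
	} > /tmp/h.$$                                                  # stage under a PID-unique temp file
	sudo cp /tmp/h.$$ /etc/hosts                                   # install the result as root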
	I1004 04:24:51.430835   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:51.559349   66293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:51.579093   66293 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545 for IP: 192.168.72.54
	I1004 04:24:51.579120   66293 certs.go:194] generating shared ca certs ...
	I1004 04:24:51.579140   66293 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:51.579318   66293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:51.579378   66293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:51.579391   66293 certs.go:256] generating profile certs ...
	I1004 04:24:51.579494   66293 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/client.key
	I1004 04:24:51.579588   66293 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.key.10ceac04
	I1004 04:24:51.579648   66293 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.key
	I1004 04:24:51.579808   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:51.579849   66293 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:51.579861   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:51.579891   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:51.579926   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:51.579961   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:51.580018   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:51.580871   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:51.630190   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:51.667887   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:51.715372   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:51.750063   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1004 04:24:51.776606   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 04:24:51.808943   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:51.839165   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:24:51.867862   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:51.898026   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:51.926810   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:51.955416   66293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:51.977621   66293 ssh_runner.go:195] Run: openssl version
	I1004 04:24:51.984023   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:51.997672   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.002969   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.003039   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.009473   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:52.021001   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:52.032834   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.037679   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.037742   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.044012   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:52.055377   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:52.066222   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.070747   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.070794   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.076922   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:52.087952   66293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:52.093052   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:52.099710   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:52.105841   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:52.112092   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:52.118428   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:52.125380   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
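	Each of the openssl runs above uses -checkend 86400, which exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now; a non-zero exit marks the certificate as expiring, so it would be regenerated before the control plane is restarted. A minimal sketch of the same check for one certificate path from this log:

	# Exit status 0: certificate valid for at least another 24h. Non-zero: expiring or already expired.
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "apiserver-kubelet-client.crt is valid for at least another 24h"
	else
	  echo "apiserver-kubelet-client.crt expires within 24h - regeneration needed"
	fi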
	I1004 04:24:52.132085   66293 kubeadm.go:392] StartCluster: {Name:no-preload-658545 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:52.132193   66293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:52.132254   66293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:52.171814   66293 cri.go:89] found id: ""
	I1004 04:24:52.171882   66293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:52.182484   66293 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:52.182508   66293 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:52.182559   66293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:52.193069   66293 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:52.194108   66293 kubeconfig.go:125] found "no-preload-658545" server: "https://192.168.72.54:8443"
	I1004 04:24:52.196237   66293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:52.206551   66293 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I1004 04:24:52.206584   66293 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:52.206598   66293 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:52.206657   66293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:52.249698   66293 cri.go:89] found id: ""
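	The empty found id result above means the crictl query matched no kube-system containers under CRI-O, so there is nothing to stop and minikube proceeds straight to stopping the kubelet. The same query can be run without --quiet to show names and states instead of bare IDs; a sketch using the label filter from this log:

	# List all kube-system containers known to CRI-O, including exited ones.
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system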
	I1004 04:24:52.249762   66293 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:52.266001   66293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:52.276056   66293 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:52.276081   66293 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:52.276128   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:24:52.285610   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:52.285677   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:52.295177   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:24:52.304309   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:52.304362   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:52.314126   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:24:52.323562   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:52.323618   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:52.332906   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:24:52.342199   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:52.342252   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:24:52.351661   66293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:52.361071   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:52.493171   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:48.874471   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:49.374480   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:49.874689   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.373726   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.874543   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:51.373743   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:51.874513   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:52.374719   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:52.874305   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:53.374419   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.049668   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:52.050522   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:55.147282   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:57.648169   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:53.586422   66293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.093219868s)
	I1004 04:24:53.586448   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:53.794085   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:53.872327   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:54.004418   66293 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:54.004510   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.505463   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.004602   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.036834   66293 api_server.go:72] duration metric: took 1.032414365s to wait for apiserver process to appear ...
	I1004 04:24:55.036858   66293 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:24:55.036877   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:55.037325   66293 api_server.go:269] stopped: https://192.168.72.54:8443/healthz: Get "https://192.168.72.54:8443/healthz": dial tcp 192.168.72.54:8443: connect: connection refused
	I1004 04:24:55.537513   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:57.951637   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:57.951663   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:57.951676   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.010162   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:58.010188   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:58.037484   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.060069   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:58.060161   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:53.874725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.373903   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.874127   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.374051   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.874019   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:56.373828   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:56.874027   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:57.373914   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:57.874598   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:58.374106   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.550080   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:56.550541   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:59.051837   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:58.536932   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.541611   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:58.541634   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:59.037723   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:59.057378   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:59.057411   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:59.536994   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:59.545827   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 200:
	ok
	I1004 04:24:59.554199   66293 api_server.go:141] control plane version: v1.31.1
	I1004 04:24:59.554238   66293 api_server.go:131] duration metric: took 4.517373336s to wait for apiserver health ...
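	The polling above starts as soon as the static pod manifests are written: the first probe is refused outright, the next ones return 403 because the request is anonymous and the RBAC bootstrap roles that open /healthz to unauthenticated clients are not installed yet, then 500 while the remaining post-start hooks finish, and finally 200. The per-check breakdown in the 500 responses is the verbose form of the endpoint, which can be requested directly; a sketch against the apiserver address from this log (-k skips certificate verification):

	# Ask the apiserver health endpoint for its per-check breakdown.
	curl -k "https://192.168.72.54:8443/healthz?verbose"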
	I1004 04:24:59.554247   66293 cni.go:84] Creating CNI manager for ""
	I1004 04:24:59.554253   66293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:59.555912   66293 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:24:59.557009   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:24:59.590146   66293 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:24:59.610903   66293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:24:59.634067   66293 system_pods.go:59] 8 kube-system pods found
	I1004 04:24:59.634109   66293 system_pods.go:61] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:24:59.634121   66293 system_pods.go:61] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:24:59.634131   66293 system_pods.go:61] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:24:59.634143   66293 system_pods.go:61] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:24:59.634151   66293 system_pods.go:61] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 04:24:59.634160   66293 system_pods.go:61] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:24:59.634168   66293 system_pods.go:61] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:24:59.634181   66293 system_pods.go:61] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 04:24:59.634189   66293 system_pods.go:74] duration metric: took 23.257716ms to wait for pod list to return data ...
	I1004 04:24:59.634198   66293 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:24:59.638128   66293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:24:59.638160   66293 node_conditions.go:123] node cpu capacity is 2
	I1004 04:24:59.638173   66293 node_conditions.go:105] duration metric: took 3.969841ms to run NodePressure ...
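	The NodePressure step reads the node's reported capacity (the ~17 GiB of ephemeral storage and 2 CPUs logged above) straight from the API. The same fields, plus the Ready condition the later waits depend on, can be pulled with kubectl; a sketch assuming the kubeconfig path and node name from this log:

	# Show capacity and the Ready condition for the restarted node.
	kubectl --kubeconfig /home/jenkins/minikube-integration/19546-9647/kubeconfig \
	  get node no-preload-658545 -o jsonpath='{.status.capacity}{"\n"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}'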
	I1004 04:24:59.638191   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:59.968829   66293 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:24:59.975495   66293 kubeadm.go:739] kubelet initialised
	I1004 04:24:59.975516   66293 kubeadm.go:740] duration metric: took 6.660196ms waiting for restarted kubelet to initialise ...
	I1004 04:24:59.975522   66293 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:25:00.084084   66293 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.113474   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.113498   66293 pod_ready.go:82] duration metric: took 29.379607ms for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.113507   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.113513   66293 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.128436   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "etcd-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.128463   66293 pod_ready.go:82] duration metric: took 14.94278ms for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.128475   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "etcd-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.128485   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.140033   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-apiserver-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.140059   66293 pod_ready.go:82] duration metric: took 11.56545ms for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.140068   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-apiserver-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.140077   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.157254   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.157286   66293 pod_ready.go:82] duration metric: took 17.197805ms for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.157298   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.157306   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.415110   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-proxy-dvr6b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.415141   66293 pod_ready.go:82] duration metric: took 257.824162ms for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.415151   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-proxy-dvr6b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.415157   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.815201   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-scheduler-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.815226   66293 pod_ready.go:82] duration metric: took 400.063468ms for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.815235   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-scheduler-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.815241   66293 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:01.214416   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:01.214448   66293 pod_ready.go:82] duration metric: took 399.197779ms for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:01.214461   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:01.214468   66293 pod_ready.go:39] duration metric: took 1.238937842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
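	Every per-pod wait above is cut short for the same reason: the node itself still reports Ready=False immediately after the kubelet restart, so the pod-level Ready checks are skipped rather than failed, and the later addon and metrics-server waits retry once the node settles. A quick way to reproduce the view minikube is acting on, assuming the kubeconfig path from this log:

	# Node condition first, then the kube-system pods the wait loop was checking.
	kubectl --kubeconfig /home/jenkins/minikube-integration/19546-9647/kubeconfig get nodes
	kubectl --kubeconfig /home/jenkins/minikube-integration/19546-9647/kubeconfig -n kube-system get pods -o wide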
	I1004 04:25:01.214484   66293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:25:01.227389   66293 ops.go:34] apiserver oom_adj: -16
	I1004 04:25:01.227414   66293 kubeadm.go:597] duration metric: took 9.044898439s to restartPrimaryControlPlane
	I1004 04:25:01.227424   66293 kubeadm.go:394] duration metric: took 9.095346513s to StartCluster
	I1004 04:25:01.227441   66293 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:25:01.227520   66293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:25:01.229057   66293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:25:01.229318   66293 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:25:01.229389   66293 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:25:01.229496   66293 addons.go:69] Setting storage-provisioner=true in profile "no-preload-658545"
	I1004 04:25:01.229505   66293 addons.go:69] Setting default-storageclass=true in profile "no-preload-658545"
	I1004 04:25:01.229512   66293 addons.go:234] Setting addon storage-provisioner=true in "no-preload-658545"
	W1004 04:25:01.229520   66293 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:25:01.229524   66293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-658545"
	I1004 04:25:01.229558   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.229562   66293 config.go:182] Loaded profile config "no-preload-658545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:25:01.229557   66293 addons.go:69] Setting metrics-server=true in profile "no-preload-658545"
	I1004 04:25:01.229607   66293 addons.go:234] Setting addon metrics-server=true in "no-preload-658545"
	W1004 04:25:01.229621   66293 addons.go:243] addon metrics-server should already be in state true
	I1004 04:25:01.229655   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.229968   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.229987   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.229971   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.230013   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.230030   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.230133   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.231051   66293 out.go:177] * Verifying Kubernetes components...
	I1004 04:25:01.232578   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:25:01.256283   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38313
	I1004 04:25:01.256939   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.257689   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.257720   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.258124   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.258358   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.262593   66293 addons.go:234] Setting addon default-storageclass=true in "no-preload-658545"
	W1004 04:25:01.262620   66293 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:25:01.262652   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.263036   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.263117   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.274653   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I1004 04:25:01.275130   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.275655   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.275685   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.276062   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.276652   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.276697   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.277272   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I1004 04:25:01.277756   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.278175   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.278191   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.278548   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.279116   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.279163   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.283719   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1004 04:25:01.284316   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.284814   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.284836   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.285180   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.285751   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.285801   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.297682   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I1004 04:25:01.297859   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I1004 04:25:01.298298   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.298418   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.298975   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.298995   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.299058   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.299077   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.299407   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.299470   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.299618   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.299660   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.301552   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.302048   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.303197   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I1004 04:25:01.303600   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.304053   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.304068   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.304124   66293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:25:01.304234   66293 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:25:01.304403   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.304571   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.305715   66293 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:25:01.305735   66293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:25:01.305850   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:25:01.305861   66293 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:25:01.305876   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.305752   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.306101   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.306321   66293 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:25:01.306334   66293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:25:01.306349   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.310374   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.310752   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.310776   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.310888   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.311057   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.311192   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.311272   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.311338   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.311603   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312049   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.312072   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312175   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.312201   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312302   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.312468   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.312497   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.312586   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.312658   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.312681   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.312811   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.312948   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.478533   66293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:25:01.511716   66293 node_ready.go:35] waiting up to 6m0s for node "no-preload-658545" to be "Ready" ...
	I1004 04:25:01.557879   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:25:01.574381   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:25:01.601090   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:25:01.601112   66293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:25:01.630465   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:25:01.630495   66293 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:25:01.681089   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:25:01.681118   66293 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:25:01.703024   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:25:02.053562   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.053585   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.053855   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.053871   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.053882   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.053891   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.054118   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.054139   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.054128   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.061624   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.061646   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.061949   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.061967   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.061985   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.580950   66293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.00653263s)
	I1004 04:25:02.581002   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.581014   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.581350   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.581368   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.581376   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.581382   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.581459   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.581594   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.581606   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.702713   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.702739   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.703015   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.703028   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.703090   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.703106   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.703117   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.703347   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.703363   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.703380   66293 addons.go:475] Verifying addon metrics-server=true in "no-preload-658545"
	I1004 04:25:02.705335   66293 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1004 04:24:59.648241   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:01.649424   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:02.706605   66293 addons.go:510] duration metric: took 1.477226s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1004 04:24:58.874143   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:59.373810   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:59.874682   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:00.374672   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:00.873725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.374175   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.874724   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:02.374725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:02.874746   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:03.373689   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.548783   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:03.549515   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:04.146633   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:06.147540   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.147626   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:03.516566   66293 node_ready.go:53] node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:06.022815   66293 node_ready.go:53] node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:03.874594   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:04.374498   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:04.874377   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:05.374050   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:05.374139   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:05.412153   67282 cri.go:89] found id: ""
	I1004 04:25:05.412185   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.412195   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:05.412202   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:05.412264   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:05.446725   67282 cri.go:89] found id: ""
	I1004 04:25:05.446750   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.446758   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:05.446763   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:05.446816   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:05.487652   67282 cri.go:89] found id: ""
	I1004 04:25:05.487678   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.487686   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:05.487691   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:05.487752   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:05.526275   67282 cri.go:89] found id: ""
	I1004 04:25:05.526302   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.526310   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:05.526319   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:05.526375   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:05.565004   67282 cri.go:89] found id: ""
	I1004 04:25:05.565034   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.565045   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:05.565052   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:05.565101   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:05.601963   67282 cri.go:89] found id: ""
	I1004 04:25:05.601990   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.601998   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:05.602003   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:05.602051   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:05.638621   67282 cri.go:89] found id: ""
	I1004 04:25:05.638651   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.638660   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:05.638666   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:05.638720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:05.678042   67282 cri.go:89] found id: ""
	I1004 04:25:05.678071   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.678082   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:05.678093   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:05.678107   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:05.720677   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:05.720707   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:05.775219   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:05.775252   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:05.789748   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:05.789774   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:05.918752   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:05.918783   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:05.918798   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:08.493206   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:05.551035   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.048870   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:10.148154   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.645708   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.516666   66293 node_ready.go:49] node "no-preload-658545" has status "Ready":"True"
	I1004 04:25:08.516690   66293 node_ready.go:38] duration metric: took 7.004939371s for node "no-preload-658545" to be "Ready" ...
	I1004 04:25:08.516699   66293 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:25:08.522101   66293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.527132   66293 pod_ready.go:93] pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:08.527153   66293 pod_ready.go:82] duration metric: took 5.024648ms for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.527162   66293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.534172   66293 pod_ready.go:93] pod "etcd-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:08.534195   66293 pod_ready.go:82] duration metric: took 7.027189ms for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.534204   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:10.541186   66293 pod_ready.go:103] pod "kube-apiserver-no-preload-658545" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.040607   66293 pod_ready.go:93] pod "kube-apiserver-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.040640   66293 pod_ready.go:82] duration metric: took 3.506428875s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.040654   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.045845   66293 pod_ready.go:93] pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.045870   66293 pod_ready.go:82] duration metric: took 5.207108ms for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.045883   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.051587   66293 pod_ready.go:93] pod "kube-proxy-dvr6b" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.051604   66293 pod_ready.go:82] duration metric: took 5.715328ms for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.051613   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.116361   66293 pod_ready.go:93] pod "kube-scheduler-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.116401   66293 pod_ready.go:82] duration metric: took 64.774234ms for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.116411   66293 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.506490   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:08.506549   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:08.545875   67282 cri.go:89] found id: ""
	I1004 04:25:08.545909   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.545920   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:08.545933   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:08.545997   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:08.582348   67282 cri.go:89] found id: ""
	I1004 04:25:08.582375   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.582383   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:08.582389   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:08.582438   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:08.637763   67282 cri.go:89] found id: ""
	I1004 04:25:08.637797   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.637809   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:08.637816   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:08.637890   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:08.681171   67282 cri.go:89] found id: ""
	I1004 04:25:08.681205   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.681216   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:08.681224   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:08.681289   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:08.719513   67282 cri.go:89] found id: ""
	I1004 04:25:08.719542   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.719549   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:08.719555   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:08.719607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:08.762152   67282 cri.go:89] found id: ""
	I1004 04:25:08.762175   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.762183   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:08.762188   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:08.762251   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:08.799857   67282 cri.go:89] found id: ""
	I1004 04:25:08.799881   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.799892   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:08.799903   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:08.799954   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:08.835264   67282 cri.go:89] found id: ""
	I1004 04:25:08.835296   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.835308   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:08.835318   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:08.835330   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:08.875501   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:08.875532   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:08.929145   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:08.929178   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:08.942769   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:08.942808   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:09.025372   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:09.025401   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:09.025416   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:11.611179   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:11.625118   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:11.625253   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:11.661512   67282 cri.go:89] found id: ""
	I1004 04:25:11.661540   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.661547   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:11.661553   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:11.661607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:11.704902   67282 cri.go:89] found id: ""
	I1004 04:25:11.704931   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.704941   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:11.704948   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:11.705007   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:11.741747   67282 cri.go:89] found id: ""
	I1004 04:25:11.741770   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.741780   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:11.741787   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:11.741841   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:11.776838   67282 cri.go:89] found id: ""
	I1004 04:25:11.776863   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.776871   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:11.776876   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:11.776927   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:11.812996   67282 cri.go:89] found id: ""
	I1004 04:25:11.813024   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.813033   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:11.813038   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:11.813097   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:11.853718   67282 cri.go:89] found id: ""
	I1004 04:25:11.853744   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.853752   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:11.853758   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:11.853813   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:11.896840   67282 cri.go:89] found id: ""
	I1004 04:25:11.896867   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.896879   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:11.896885   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:11.896943   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:11.932529   67282 cri.go:89] found id: ""
	I1004 04:25:11.932552   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.932561   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:11.932569   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:11.932580   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:11.946504   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:11.946538   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:12.024692   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:12.024713   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:12.024724   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:12.111942   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:12.111976   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:12.156483   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:12.156522   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:10.049912   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.051024   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.646058   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:16.647214   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.123343   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:16.622947   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.708243   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:14.722943   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:14.723007   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:14.758502   67282 cri.go:89] found id: ""
	I1004 04:25:14.758555   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.758567   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:14.758575   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:14.758633   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:14.796496   67282 cri.go:89] found id: ""
	I1004 04:25:14.796525   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.796532   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:14.796538   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:14.796595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:14.832216   67282 cri.go:89] found id: ""
	I1004 04:25:14.832247   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.832259   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:14.832266   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:14.832330   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:14.868461   67282 cri.go:89] found id: ""
	I1004 04:25:14.868491   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.868501   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:14.868509   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:14.868568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:14.909827   67282 cri.go:89] found id: ""
	I1004 04:25:14.909857   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.909867   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:14.909875   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:14.909949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:14.947809   67282 cri.go:89] found id: ""
	I1004 04:25:14.947839   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.947850   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:14.947857   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:14.947904   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:14.984073   67282 cri.go:89] found id: ""
	I1004 04:25:14.984101   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.984110   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:14.984115   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:14.984170   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:15.021145   67282 cri.go:89] found id: ""
	I1004 04:25:15.021179   67282 logs.go:282] 0 containers: []
	W1004 04:25:15.021191   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:15.021204   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:15.021217   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:15.075295   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:15.075328   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:15.088953   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:15.088980   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:15.175103   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:15.175128   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:15.175143   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:15.259004   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:15.259044   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:17.825029   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:17.839496   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:17.839574   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:17.877643   67282 cri.go:89] found id: ""
	I1004 04:25:17.877673   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.877684   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:17.877692   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:17.877751   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:17.921534   67282 cri.go:89] found id: ""
	I1004 04:25:17.921563   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.921574   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:17.921581   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:17.921634   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:17.961281   67282 cri.go:89] found id: ""
	I1004 04:25:17.961307   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.961315   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:17.961320   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:17.961386   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:18.001036   67282 cri.go:89] found id: ""
	I1004 04:25:18.001066   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.001078   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:18.001085   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:18.001156   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:18.043212   67282 cri.go:89] found id: ""
	I1004 04:25:18.043241   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.043252   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:18.043259   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:18.043319   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:18.082399   67282 cri.go:89] found id: ""
	I1004 04:25:18.082423   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.082430   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:18.082435   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:18.082493   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:18.120507   67282 cri.go:89] found id: ""
	I1004 04:25:18.120534   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.120544   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:18.120550   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:18.120605   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:18.156601   67282 cri.go:89] found id: ""
	I1004 04:25:18.156629   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.156640   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:18.156650   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:18.156663   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:18.198393   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:18.198424   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:18.250992   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:18.251032   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:18.267984   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:18.268015   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:18.343283   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:18.343303   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:18.343314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:14.549511   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:17.048940   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:19.051125   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:18.648462   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:21.146813   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.147244   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:18.624165   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:20.627159   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.123629   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:20.922578   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:20.938037   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:20.938122   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:20.978389   67282 cri.go:89] found id: ""
	I1004 04:25:20.978417   67282 logs.go:282] 0 containers: []
	W1004 04:25:20.978426   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:20.978431   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:20.978478   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:21.033490   67282 cri.go:89] found id: ""
	I1004 04:25:21.033520   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.033528   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:21.033533   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:21.033589   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:21.087168   67282 cri.go:89] found id: ""
	I1004 04:25:21.087198   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.087209   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:21.087216   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:21.087299   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:21.144327   67282 cri.go:89] found id: ""
	I1004 04:25:21.144356   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.144366   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:21.144373   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:21.144431   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:21.183336   67282 cri.go:89] found id: ""
	I1004 04:25:21.183378   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.183390   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:21.183397   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:21.183459   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:21.221847   67282 cri.go:89] found id: ""
	I1004 04:25:21.221878   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.221892   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:21.221901   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:21.221961   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:21.258542   67282 cri.go:89] found id: ""
	I1004 04:25:21.258573   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.258584   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:21.258590   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:21.258652   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:21.303173   67282 cri.go:89] found id: ""
	I1004 04:25:21.303202   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.303211   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:21.303218   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:21.303243   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:21.358109   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:21.358146   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:21.373958   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:21.373987   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:21.450956   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:21.450980   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:21.451006   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:21.534763   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:21.534807   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:21.550109   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.550304   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:25.148868   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:27.647698   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:25.622123   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:27.624777   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:24.082856   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:24.098263   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:24.098336   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:24.144969   67282 cri.go:89] found id: ""
	I1004 04:25:24.144999   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.145009   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:24.145015   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:24.145072   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:24.185670   67282 cri.go:89] found id: ""
	I1004 04:25:24.185693   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.185702   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:24.185708   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:24.185769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:24.223657   67282 cri.go:89] found id: ""
	I1004 04:25:24.223691   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.223703   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:24.223710   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:24.223769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:24.261841   67282 cri.go:89] found id: ""
	I1004 04:25:24.261864   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.261872   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:24.261878   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:24.261938   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:24.299734   67282 cri.go:89] found id: ""
	I1004 04:25:24.299758   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.299769   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:24.299775   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:24.299867   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:24.337413   67282 cri.go:89] found id: ""
	I1004 04:25:24.337440   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.337450   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:24.337457   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:24.337523   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:24.375963   67282 cri.go:89] found id: ""
	I1004 04:25:24.375995   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.376007   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:24.376014   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:24.376073   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:24.415978   67282 cri.go:89] found id: ""
	I1004 04:25:24.416010   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.416021   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:24.416030   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:24.416045   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:24.458703   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:24.458738   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:24.510669   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:24.510704   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:24.525646   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:24.525687   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:24.603280   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:24.603310   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:24.603324   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:27.184935   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:27.200241   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:27.200321   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:27.237546   67282 cri.go:89] found id: ""
	I1004 04:25:27.237576   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.237588   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:27.237596   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:27.237653   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:27.272598   67282 cri.go:89] found id: ""
	I1004 04:25:27.272625   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.272634   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:27.272642   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:27.272700   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:27.306659   67282 cri.go:89] found id: ""
	I1004 04:25:27.306693   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.306706   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:27.306715   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:27.306779   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:27.344315   67282 cri.go:89] found id: ""
	I1004 04:25:27.344349   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.344363   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:27.344370   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:27.344428   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:27.380231   67282 cri.go:89] found id: ""
	I1004 04:25:27.380267   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.380278   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:27.380286   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:27.380346   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:27.418137   67282 cri.go:89] found id: ""
	I1004 04:25:27.418161   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.418169   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:27.418174   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:27.418225   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:27.458235   67282 cri.go:89] found id: ""
	I1004 04:25:27.458262   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.458283   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:27.458289   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:27.458342   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:27.495161   67282 cri.go:89] found id: ""
	I1004 04:25:27.495189   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.495198   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:27.495206   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:27.495217   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:27.547749   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:27.547795   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:27.563322   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:27.563355   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:27.636682   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:27.636710   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:27.636725   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:27.711316   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:27.711354   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:26.050001   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:28.548322   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.147210   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:32.647027   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.122267   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:32.122501   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.250361   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:30.265789   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:30.265866   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:30.305127   67282 cri.go:89] found id: ""
	I1004 04:25:30.305166   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.305183   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:30.305190   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:30.305258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:30.346529   67282 cri.go:89] found id: ""
	I1004 04:25:30.346560   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.346570   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:30.346577   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:30.346641   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:30.387368   67282 cri.go:89] found id: ""
	I1004 04:25:30.387407   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.387418   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:30.387425   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:30.387489   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:30.428193   67282 cri.go:89] found id: ""
	I1004 04:25:30.428230   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.428242   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:30.428248   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:30.428308   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:30.465484   67282 cri.go:89] found id: ""
	I1004 04:25:30.465509   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.465518   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:30.465523   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:30.465573   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:30.501133   67282 cri.go:89] found id: ""
	I1004 04:25:30.501163   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.501174   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:30.501181   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:30.501248   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:30.536492   67282 cri.go:89] found id: ""
	I1004 04:25:30.536519   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.536530   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:30.536536   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:30.536587   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:30.571721   67282 cri.go:89] found id: ""
	I1004 04:25:30.571745   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.571753   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:30.571761   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:30.571771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:30.626922   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:30.626958   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:30.641817   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:30.641852   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:30.725604   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:30.725633   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:30.725647   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:30.800359   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:30.800393   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:33.340747   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:33.355862   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:33.355936   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:33.397628   67282 cri.go:89] found id: ""
	I1004 04:25:33.397655   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.397662   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:33.397668   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:33.397718   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:33.442100   67282 cri.go:89] found id: ""
	I1004 04:25:33.442128   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.442137   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:33.442142   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:33.442187   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:33.481035   67282 cri.go:89] found id: ""
	I1004 04:25:33.481063   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.481076   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:33.481083   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:33.481149   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:30.549816   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:33.048791   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:35.147125   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:37.647224   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:34.122573   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:36.622639   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:33.516633   67282 cri.go:89] found id: ""
	I1004 04:25:33.516661   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.516669   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:33.516677   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:33.516727   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:33.556569   67282 cri.go:89] found id: ""
	I1004 04:25:33.556600   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.556610   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:33.556617   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:33.556679   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:33.591678   67282 cri.go:89] found id: ""
	I1004 04:25:33.591715   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.591724   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:33.591731   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:33.591786   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:33.626571   67282 cri.go:89] found id: ""
	I1004 04:25:33.626594   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.626602   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:33.626607   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:33.626650   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:33.664336   67282 cri.go:89] found id: ""
	I1004 04:25:33.664359   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.664367   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:33.664375   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:33.664386   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:33.748013   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:33.748047   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:33.786730   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:33.786767   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:33.839355   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:33.839392   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:33.853807   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:33.853835   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:33.920183   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:36.420485   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:36.435150   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:36.435221   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:36.471818   67282 cri.go:89] found id: ""
	I1004 04:25:36.471842   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.471850   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:36.471855   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:36.471908   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:36.511469   67282 cri.go:89] found id: ""
	I1004 04:25:36.511496   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.511504   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:36.511509   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:36.511557   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:36.552607   67282 cri.go:89] found id: ""
	I1004 04:25:36.552633   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.552641   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:36.552646   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:36.552702   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:36.596260   67282 cri.go:89] found id: ""
	I1004 04:25:36.596282   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.596290   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:36.596295   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:36.596340   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:36.636674   67282 cri.go:89] found id: ""
	I1004 04:25:36.636700   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.636708   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:36.636713   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:36.636764   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:36.675155   67282 cri.go:89] found id: ""
	I1004 04:25:36.675194   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.675206   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:36.675214   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:36.675279   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:36.713458   67282 cri.go:89] found id: ""
	I1004 04:25:36.713485   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.713493   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:36.713498   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:36.713552   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:36.754567   67282 cri.go:89] found id: ""
	I1004 04:25:36.754596   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.754607   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:36.754618   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:36.754631   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:36.824413   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:36.824439   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:36.824453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:36.900438   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:36.900471   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:36.942238   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:36.942264   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:36.992527   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:36.992556   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:35.050546   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:37.548965   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:39.647505   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:42.146720   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:38.623559   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:41.121785   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:43.122437   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:39.506599   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:39.520782   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:39.520854   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:39.561853   67282 cri.go:89] found id: ""
	I1004 04:25:39.561880   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.561891   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:39.561898   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:39.561955   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:39.597548   67282 cri.go:89] found id: ""
	I1004 04:25:39.597581   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.597591   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:39.597598   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:39.597659   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:39.634481   67282 cri.go:89] found id: ""
	I1004 04:25:39.634517   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.634525   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:39.634530   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:39.634575   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:39.677077   67282 cri.go:89] found id: ""
	I1004 04:25:39.677107   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.677117   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:39.677124   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:39.677185   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:39.716334   67282 cri.go:89] found id: ""
	I1004 04:25:39.716356   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.716364   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:39.716369   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:39.716416   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:39.754765   67282 cri.go:89] found id: ""
	I1004 04:25:39.754792   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.754803   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:39.754810   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:39.754863   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:39.788782   67282 cri.go:89] found id: ""
	I1004 04:25:39.788811   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.788824   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:39.788832   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:39.788890   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:39.821946   67282 cri.go:89] found id: ""
	I1004 04:25:39.821970   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.821979   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:39.821988   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:39.822001   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:39.892629   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:39.892657   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:39.892674   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:39.973480   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:39.973515   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:40.018175   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:40.018203   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:40.068585   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:40.068620   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:42.583639   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:42.597249   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:42.597333   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:42.631993   67282 cri.go:89] found id: ""
	I1004 04:25:42.632020   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.632030   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:42.632037   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:42.632091   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:42.669708   67282 cri.go:89] found id: ""
	I1004 04:25:42.669739   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.669749   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:42.669762   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:42.669836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:42.705995   67282 cri.go:89] found id: ""
	I1004 04:25:42.706019   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.706030   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:42.706037   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:42.706094   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:42.740436   67282 cri.go:89] found id: ""
	I1004 04:25:42.740458   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.740466   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:42.740472   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:42.740524   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:42.774516   67282 cri.go:89] found id: ""
	I1004 04:25:42.774546   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.774557   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:42.774564   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:42.774614   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:42.807471   67282 cri.go:89] found id: ""
	I1004 04:25:42.807502   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.807510   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:42.807516   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:42.807561   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:42.851943   67282 cri.go:89] found id: ""
	I1004 04:25:42.851968   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.851977   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:42.851983   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:42.852040   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:42.887762   67282 cri.go:89] found id: ""
	I1004 04:25:42.887801   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.887812   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:42.887822   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:42.887834   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:42.960398   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:42.960423   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:42.960440   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:43.040078   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:43.040117   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:43.081614   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:43.081638   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:43.132744   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:43.132781   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:39.551722   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:42.049418   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:44.049835   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:44.646919   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:47.146884   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:45.622878   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.122299   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:45.647332   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:45.660765   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:45.660834   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:45.696351   67282 cri.go:89] found id: ""
	I1004 04:25:45.696379   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.696390   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:45.696397   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:45.696449   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:45.738529   67282 cri.go:89] found id: ""
	I1004 04:25:45.738553   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.738561   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:45.738566   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:45.738621   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:45.773071   67282 cri.go:89] found id: ""
	I1004 04:25:45.773094   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.773103   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:45.773110   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:45.773165   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:45.810813   67282 cri.go:89] found id: ""
	I1004 04:25:45.810840   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.810852   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:45.810859   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:45.810913   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:45.848916   67282 cri.go:89] found id: ""
	I1004 04:25:45.848942   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.848951   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:45.848956   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:45.849014   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:45.886737   67282 cri.go:89] found id: ""
	I1004 04:25:45.886763   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.886772   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:45.886778   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:45.886825   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:45.922263   67282 cri.go:89] found id: ""
	I1004 04:25:45.922291   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.922301   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:45.922307   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:45.922364   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:45.956688   67282 cri.go:89] found id: ""
	I1004 04:25:45.956710   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.956718   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:45.956725   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:45.956737   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:46.007334   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:46.007365   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:46.020892   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:46.020916   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:46.089786   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:46.089809   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:46.089822   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:46.175987   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:46.176017   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:46.549153   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.549893   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:49.147322   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:51.647365   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:50.622540   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:52.623714   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.718354   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:48.733291   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:48.733347   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:48.769149   67282 cri.go:89] found id: ""
	I1004 04:25:48.769175   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.769185   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:48.769193   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:48.769249   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:48.804386   67282 cri.go:89] found id: ""
	I1004 04:25:48.804410   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.804418   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:48.804423   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:48.804467   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:48.841747   67282 cri.go:89] found id: ""
	I1004 04:25:48.841774   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.841782   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:48.841788   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:48.841836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:48.880025   67282 cri.go:89] found id: ""
	I1004 04:25:48.880048   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.880058   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:48.880064   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:48.880121   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:48.916506   67282 cri.go:89] found id: ""
	I1004 04:25:48.916530   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.916540   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:48.916547   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:48.916607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:48.952082   67282 cri.go:89] found id: ""
	I1004 04:25:48.952105   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.952116   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:48.952122   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:48.952177   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:48.986097   67282 cri.go:89] found id: ""
	I1004 04:25:48.986124   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.986135   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:48.986143   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:48.986210   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:49.020400   67282 cri.go:89] found id: ""
	I1004 04:25:49.020428   67282 logs.go:282] 0 containers: []
	W1004 04:25:49.020436   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:49.020445   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:49.020462   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:49.074724   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:49.074754   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:49.088504   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:49.088529   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:49.165940   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:49.165961   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:49.165972   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:49.244482   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:49.244519   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:51.786086   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:51.800644   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:51.800720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:51.839951   67282 cri.go:89] found id: ""
	I1004 04:25:51.839980   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.839990   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:51.839997   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:51.840055   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:51.878660   67282 cri.go:89] found id: ""
	I1004 04:25:51.878684   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.878695   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:51.878701   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:51.878762   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:51.916640   67282 cri.go:89] found id: ""
	I1004 04:25:51.916665   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.916672   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:51.916678   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:51.916725   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:51.953800   67282 cri.go:89] found id: ""
	I1004 04:25:51.953827   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.953835   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:51.953840   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:51.953897   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:51.993107   67282 cri.go:89] found id: ""
	I1004 04:25:51.993139   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.993150   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:51.993157   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:51.993214   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:52.027426   67282 cri.go:89] found id: ""
	I1004 04:25:52.027454   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.027464   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:52.027470   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:52.027521   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:52.063608   67282 cri.go:89] found id: ""
	I1004 04:25:52.063638   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.063650   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:52.063657   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:52.063717   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:52.100052   67282 cri.go:89] found id: ""
	I1004 04:25:52.100083   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.100094   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:52.100106   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:52.100125   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:52.113801   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:52.113827   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:52.201284   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:52.201311   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:52.201322   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:52.280014   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:52.280047   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:52.318120   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:52.318145   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:51.048719   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:53.050304   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:54.146697   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:56.147015   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:58.148736   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:55.122546   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:57.123051   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:54.872245   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:54.886914   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:54.886990   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:54.927117   67282 cri.go:89] found id: ""
	I1004 04:25:54.927144   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.927152   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:54.927157   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:54.927205   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:54.962510   67282 cri.go:89] found id: ""
	I1004 04:25:54.962540   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.962552   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:54.962559   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:54.962619   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:54.996812   67282 cri.go:89] found id: ""
	I1004 04:25:54.996839   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.996848   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:54.996854   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:54.996905   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:55.034557   67282 cri.go:89] found id: ""
	I1004 04:25:55.034587   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.034597   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:55.034605   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:55.034667   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:55.072383   67282 cri.go:89] found id: ""
	I1004 04:25:55.072416   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.072427   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:55.072434   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:55.072494   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:55.121561   67282 cri.go:89] found id: ""
	I1004 04:25:55.121588   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.121598   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:55.121604   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:55.121775   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:55.165525   67282 cri.go:89] found id: ""
	I1004 04:25:55.165553   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.165564   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:55.165570   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:55.165627   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:55.201808   67282 cri.go:89] found id: ""
	I1004 04:25:55.201836   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.201846   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:55.201857   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:55.201870   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:55.280889   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:55.280917   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:55.280932   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:55.354979   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:55.355012   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:55.397144   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:55.397174   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:55.448710   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:55.448746   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:57.963840   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:57.977027   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:57.977085   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:58.019244   67282 cri.go:89] found id: ""
	I1004 04:25:58.019273   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.019285   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:58.019293   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:58.019351   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:58.057979   67282 cri.go:89] found id: ""
	I1004 04:25:58.058008   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.058018   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:58.058027   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:58.058084   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:58.094607   67282 cri.go:89] found id: ""
	I1004 04:25:58.094639   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.094652   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:58.094658   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:58.094726   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:58.130150   67282 cri.go:89] found id: ""
	I1004 04:25:58.130177   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.130188   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:58.130196   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:58.130259   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:58.167662   67282 cri.go:89] found id: ""
	I1004 04:25:58.167691   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.167701   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:58.167709   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:58.167769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:58.203480   67282 cri.go:89] found id: ""
	I1004 04:25:58.203568   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.203585   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:58.203594   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:58.203662   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:58.239516   67282 cri.go:89] found id: ""
	I1004 04:25:58.239537   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.239545   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:58.239551   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:58.239595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:58.275525   67282 cri.go:89] found id: ""
	I1004 04:25:58.275553   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.275564   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:58.275574   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:58.275587   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:58.331191   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:58.331224   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:58.345629   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:58.345659   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:58.416297   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:58.416315   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:58.416326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:58.490659   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:58.490694   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:55.548913   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:57.549457   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:00.647858   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.146570   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:59.623396   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.624074   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.030058   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:01.044568   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:01.044659   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:01.082652   67282 cri.go:89] found id: ""
	I1004 04:26:01.082679   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.082688   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:01.082694   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:01.082750   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:01.120781   67282 cri.go:89] found id: ""
	I1004 04:26:01.120805   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.120814   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:01.120821   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:01.120878   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:01.159494   67282 cri.go:89] found id: ""
	I1004 04:26:01.159523   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.159531   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:01.159537   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:01.159584   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:01.195482   67282 cri.go:89] found id: ""
	I1004 04:26:01.195512   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.195521   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:01.195529   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:01.195589   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:01.233971   67282 cri.go:89] found id: ""
	I1004 04:26:01.233996   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.234006   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:01.234014   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:01.234076   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:01.275935   67282 cri.go:89] found id: ""
	I1004 04:26:01.275958   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.275966   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:01.275971   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:01.276018   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:01.315512   67282 cri.go:89] found id: ""
	I1004 04:26:01.315535   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.315543   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:01.315548   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:01.315603   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:01.356465   67282 cri.go:89] found id: ""
	I1004 04:26:01.356491   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.356505   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:01.356513   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:01.356523   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:01.409237   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:01.409280   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:01.423426   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:01.423453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:01.501372   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:01.501397   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:01.501413   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:01.591087   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:01.591131   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:59.549485   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.550138   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.550258   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:05.646818   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:07.647322   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.634636   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:06.122840   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:04.152506   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:04.166847   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:04.166911   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:04.203138   67282 cri.go:89] found id: ""
	I1004 04:26:04.203167   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.203177   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:04.203184   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:04.203243   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:04.237427   67282 cri.go:89] found id: ""
	I1004 04:26:04.237453   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.237464   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:04.237471   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:04.237525   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:04.272468   67282 cri.go:89] found id: ""
	I1004 04:26:04.272499   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.272511   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:04.272518   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:04.272584   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:04.307347   67282 cri.go:89] found id: ""
	I1004 04:26:04.307373   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.307384   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:04.307390   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:04.307448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:04.342450   67282 cri.go:89] found id: ""
	I1004 04:26:04.342487   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.342498   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:04.342506   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:04.342568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:04.382846   67282 cri.go:89] found id: ""
	I1004 04:26:04.382874   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.382885   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:04.382893   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:04.382945   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:04.418234   67282 cri.go:89] found id: ""
	I1004 04:26:04.418260   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.418268   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:04.418273   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:04.418328   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:04.453433   67282 cri.go:89] found id: ""
	I1004 04:26:04.453456   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.453464   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:04.453473   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:04.453487   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:04.502093   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:04.502123   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:04.515865   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:04.515897   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:04.595672   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:04.595698   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:04.595713   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:04.675273   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:04.675304   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:07.214965   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:07.229495   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:07.229568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:07.268541   67282 cri.go:89] found id: ""
	I1004 04:26:07.268580   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.268591   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:07.268599   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:07.268662   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:07.321382   67282 cri.go:89] found id: ""
	I1004 04:26:07.321414   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.321424   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:07.321431   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:07.321490   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:07.379840   67282 cri.go:89] found id: ""
	I1004 04:26:07.379869   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.379878   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:07.379884   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:07.379928   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:07.431304   67282 cri.go:89] found id: ""
	I1004 04:26:07.431333   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.431343   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:07.431349   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:07.431407   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:07.466853   67282 cri.go:89] found id: ""
	I1004 04:26:07.466880   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.466888   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:07.466893   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:07.466951   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:07.501587   67282 cri.go:89] found id: ""
	I1004 04:26:07.501613   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.501624   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:07.501630   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:07.501685   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:07.536326   67282 cri.go:89] found id: ""
	I1004 04:26:07.536354   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.536364   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:07.536371   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:07.536426   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:07.575257   67282 cri.go:89] found id: ""
	I1004 04:26:07.575283   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.575292   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:07.575299   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:07.575310   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:07.629477   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:07.629515   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:07.643294   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:07.643326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:07.720324   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:07.720350   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:07.720365   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:07.797641   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:07.797678   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:06.049580   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:08.548786   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.146544   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:12.146842   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:08.622497   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.622759   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:12.624285   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.339392   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:10.353341   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:10.353397   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:10.391023   67282 cri.go:89] found id: ""
	I1004 04:26:10.391049   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.391059   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:10.391066   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:10.391129   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:10.424345   67282 cri.go:89] found id: ""
	I1004 04:26:10.424376   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.424388   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:10.424396   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:10.424466   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:10.459344   67282 cri.go:89] found id: ""
	I1004 04:26:10.459374   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.459387   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:10.459394   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:10.459451   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:10.494898   67282 cri.go:89] found id: ""
	I1004 04:26:10.494921   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.494929   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:10.494935   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:10.494982   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:10.531084   67282 cri.go:89] found id: ""
	I1004 04:26:10.531111   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.531122   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:10.531129   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:10.531185   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:10.566918   67282 cri.go:89] found id: ""
	I1004 04:26:10.566949   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.566960   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:10.566967   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:10.567024   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:10.604888   67282 cri.go:89] found id: ""
	I1004 04:26:10.604923   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.604935   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:10.604942   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:10.605013   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:10.641578   67282 cri.go:89] found id: ""
	I1004 04:26:10.641606   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.641620   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:10.641631   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:10.641648   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:10.696848   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:10.696882   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:10.710393   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:10.710417   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:10.780854   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:10.780881   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:10.780895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:10.861732   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:10.861771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:13.403231   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:13.417246   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:13.417319   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:13.451581   67282 cri.go:89] found id: ""
	I1004 04:26:13.451607   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.451616   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:13.451621   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:13.451681   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:13.488362   67282 cri.go:89] found id: ""
	I1004 04:26:13.488388   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.488396   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:13.488401   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:13.488449   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:10.549905   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:13.048997   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:14.646627   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:16.647879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:15.123067   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:17.622729   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:13.522697   67282 cri.go:89] found id: ""
	I1004 04:26:13.522729   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.522740   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:13.522751   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:13.522803   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:13.564926   67282 cri.go:89] found id: ""
	I1004 04:26:13.564959   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.564972   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:13.564981   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:13.565058   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:13.600582   67282 cri.go:89] found id: ""
	I1004 04:26:13.600612   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.600622   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:13.600630   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:13.600688   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:13.634550   67282 cri.go:89] found id: ""
	I1004 04:26:13.634575   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.634584   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:13.634591   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:13.634646   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:13.669281   67282 cri.go:89] found id: ""
	I1004 04:26:13.669311   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.669320   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:13.669326   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:13.669388   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:13.707664   67282 cri.go:89] found id: ""
	I1004 04:26:13.707693   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.707703   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:13.707713   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:13.707727   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:13.721127   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:13.721168   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:13.788026   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:13.788051   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:13.788067   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:13.864505   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:13.864542   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:13.902896   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:13.902921   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:16.456813   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:16.470071   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:16.470138   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:16.506085   67282 cri.go:89] found id: ""
	I1004 04:26:16.506114   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.506125   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:16.506133   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:16.506189   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:16.540016   67282 cri.go:89] found id: ""
	I1004 04:26:16.540044   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.540052   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:16.540056   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:16.540100   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:16.579247   67282 cri.go:89] found id: ""
	I1004 04:26:16.579272   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.579280   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:16.579285   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:16.579332   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:16.615552   67282 cri.go:89] found id: ""
	I1004 04:26:16.615579   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.615601   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:16.615621   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:16.615675   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:16.652639   67282 cri.go:89] found id: ""
	I1004 04:26:16.652660   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.652671   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:16.652678   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:16.652732   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:16.689607   67282 cri.go:89] found id: ""
	I1004 04:26:16.689631   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.689643   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:16.689650   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:16.689720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:16.724430   67282 cri.go:89] found id: ""
	I1004 04:26:16.724458   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.724469   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:16.724475   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:16.724534   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:16.758378   67282 cri.go:89] found id: ""
	I1004 04:26:16.758412   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.758423   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:16.758434   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:16.758454   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:16.826234   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:16.826259   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:16.826273   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:16.906908   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:16.906945   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:16.950295   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:16.950321   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:17.002216   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:17.002253   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:15.549441   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:17.549816   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.147105   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:21.147403   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.622982   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:21.624073   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.516253   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:19.529664   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:19.529726   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:19.566669   67282 cri.go:89] found id: ""
	I1004 04:26:19.566700   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.566711   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:19.566718   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:19.566772   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:19.605923   67282 cri.go:89] found id: ""
	I1004 04:26:19.605951   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.605961   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:19.605968   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:19.606025   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:19.645132   67282 cri.go:89] found id: ""
	I1004 04:26:19.645158   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.645168   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:19.645175   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:19.645235   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:19.687135   67282 cri.go:89] found id: ""
	I1004 04:26:19.687160   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.687171   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:19.687178   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:19.687256   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:19.724180   67282 cri.go:89] found id: ""
	I1004 04:26:19.724213   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.724224   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:19.724230   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:19.724295   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:19.761608   67282 cri.go:89] found id: ""
	I1004 04:26:19.761638   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.761649   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:19.761656   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:19.761714   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:19.795060   67282 cri.go:89] found id: ""
	I1004 04:26:19.795089   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.795099   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:19.795106   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:19.795164   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:19.835678   67282 cri.go:89] found id: ""
	I1004 04:26:19.835703   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.835712   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:19.835722   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:19.835736   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:19.889508   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:19.889543   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:19.903206   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:19.903233   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:19.973445   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:19.973471   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:19.973485   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:20.053996   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:20.054034   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:22.594171   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:22.609084   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:22.609145   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:22.650423   67282 cri.go:89] found id: ""
	I1004 04:26:22.650449   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.650459   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:22.650466   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:22.650525   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:22.686420   67282 cri.go:89] found id: ""
	I1004 04:26:22.686450   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.686461   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:22.686469   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:22.686535   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:22.721385   67282 cri.go:89] found id: ""
	I1004 04:26:22.721408   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.721416   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:22.721421   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:22.721484   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:22.765461   67282 cri.go:89] found id: ""
	I1004 04:26:22.765492   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.765504   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:22.765511   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:22.765569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:22.798192   67282 cri.go:89] found id: ""
	I1004 04:26:22.798220   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.798230   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:22.798235   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:22.798293   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:22.833110   67282 cri.go:89] found id: ""
	I1004 04:26:22.833138   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.833147   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:22.833153   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:22.833212   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:22.875653   67282 cri.go:89] found id: ""
	I1004 04:26:22.875684   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.875696   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:22.875704   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:22.875766   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:22.913906   67282 cri.go:89] found id: ""
	I1004 04:26:22.913931   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.913938   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:22.913946   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:22.913957   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:22.969480   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:22.969511   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:22.983475   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:22.983500   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:23.059953   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:23.059982   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:23.059996   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:23.139106   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:23.139134   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:19.550307   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:22.048618   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:23.647507   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:26.147135   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:24.122370   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:26.122976   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:25.678489   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:25.692648   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:25.692705   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:25.728232   67282 cri.go:89] found id: ""
	I1004 04:26:25.728261   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.728269   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:25.728276   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:25.728335   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:25.763956   67282 cri.go:89] found id: ""
	I1004 04:26:25.763982   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.763991   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:25.763998   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:25.764057   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:25.799715   67282 cri.go:89] found id: ""
	I1004 04:26:25.799743   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.799753   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:25.799761   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:25.799840   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:25.834823   67282 cri.go:89] found id: ""
	I1004 04:26:25.834855   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.834866   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:25.834873   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:25.834933   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:25.869194   67282 cri.go:89] found id: ""
	I1004 04:26:25.869224   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.869235   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:25.869242   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:25.869303   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:25.903514   67282 cri.go:89] found id: ""
	I1004 04:26:25.903543   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.903553   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:25.903558   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:25.903606   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:25.939887   67282 cri.go:89] found id: ""
	I1004 04:26:25.939919   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.939930   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:25.939938   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:25.939996   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:25.981922   67282 cri.go:89] found id: ""
	I1004 04:26:25.981944   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.981952   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:25.981960   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:25.981971   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:26.064860   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:26.064891   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:26.105272   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:26.105296   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:26.162602   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:26.162640   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:26.176408   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:26.176439   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:26.242264   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:24.549364   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:27.049470   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.646788   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.146205   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:33.146879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.622691   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.122181   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:33.123226   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.742417   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:28.755655   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:28.755723   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:28.789338   67282 cri.go:89] found id: ""
	I1004 04:26:28.789361   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.789369   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:28.789374   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:28.789420   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:28.823513   67282 cri.go:89] found id: ""
	I1004 04:26:28.823544   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.823555   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:28.823562   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:28.823619   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:28.858826   67282 cri.go:89] found id: ""
	I1004 04:26:28.858854   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.858866   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:28.858873   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:28.858927   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:28.892552   67282 cri.go:89] found id: ""
	I1004 04:26:28.892579   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.892587   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:28.892593   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:28.892639   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:28.929250   67282 cri.go:89] found id: ""
	I1004 04:26:28.929277   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.929284   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:28.929289   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:28.929335   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:28.966554   67282 cri.go:89] found id: ""
	I1004 04:26:28.966581   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.966589   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:28.966594   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:28.966642   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:28.999930   67282 cri.go:89] found id: ""
	I1004 04:26:28.999954   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.999964   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:28.999970   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:29.000025   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:29.033687   67282 cri.go:89] found id: ""
	I1004 04:26:29.033717   67282 logs.go:282] 0 containers: []
	W1004 04:26:29.033727   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:29.033737   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:29.033752   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:29.109486   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:29.109523   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:29.149125   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:29.149152   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:29.197830   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:29.197861   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:29.211182   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:29.211204   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:29.276808   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:31.777659   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:31.791374   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:31.791425   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:31.825453   67282 cri.go:89] found id: ""
	I1004 04:26:31.825480   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.825489   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:31.825495   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:31.825553   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:31.857845   67282 cri.go:89] found id: ""
	I1004 04:26:31.857875   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.857884   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:31.857893   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:31.857949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:31.892282   67282 cri.go:89] found id: ""
	I1004 04:26:31.892309   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.892317   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:31.892322   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:31.892366   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:31.926016   67282 cri.go:89] found id: ""
	I1004 04:26:31.926037   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.926045   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:31.926051   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:31.926094   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:31.961382   67282 cri.go:89] found id: ""
	I1004 04:26:31.961415   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.961425   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:31.961433   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:31.961492   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:31.994570   67282 cri.go:89] found id: ""
	I1004 04:26:31.994602   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.994613   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:31.994620   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:31.994675   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:32.027359   67282 cri.go:89] found id: ""
	I1004 04:26:32.027383   67282 logs.go:282] 0 containers: []
	W1004 04:26:32.027391   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:32.027397   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:32.027448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:32.063518   67282 cri.go:89] found id: ""
	I1004 04:26:32.063545   67282 logs.go:282] 0 containers: []
	W1004 04:26:32.063555   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:32.063565   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:32.063577   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:32.151555   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:32.151582   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:32.190678   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:32.190700   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:32.243567   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:32.243596   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:32.256293   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:32.256320   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:32.329513   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:29.548687   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.550184   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:34.050659   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:35.147870   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:37.646571   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:35.623302   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:38.122555   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:34.830126   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:34.844760   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:34.844833   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:34.878409   67282 cri.go:89] found id: ""
	I1004 04:26:34.878433   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.878440   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:34.878445   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:34.878500   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:34.916493   67282 cri.go:89] found id: ""
	I1004 04:26:34.916516   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.916524   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:34.916532   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:34.916577   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:34.954532   67282 cri.go:89] found id: ""
	I1004 04:26:34.954556   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.954565   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:34.954570   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:34.954616   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:34.987163   67282 cri.go:89] found id: ""
	I1004 04:26:34.987190   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.987198   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:34.987205   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:34.987261   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:35.021351   67282 cri.go:89] found id: ""
	I1004 04:26:35.021379   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.021388   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:35.021394   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:35.021452   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:35.056350   67282 cri.go:89] found id: ""
	I1004 04:26:35.056376   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.056384   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:35.056390   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:35.056448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:35.093375   67282 cri.go:89] found id: ""
	I1004 04:26:35.093402   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.093412   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:35.093420   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:35.093486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:35.130509   67282 cri.go:89] found id: ""
	I1004 04:26:35.130532   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.130541   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:35.130549   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:35.130562   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:35.188138   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:35.188174   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:35.202226   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:35.202267   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:35.276652   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:35.276675   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:35.276688   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:35.357339   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:35.357373   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:37.898166   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:37.911319   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:37.911387   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:37.944551   67282 cri.go:89] found id: ""
	I1004 04:26:37.944578   67282 logs.go:282] 0 containers: []
	W1004 04:26:37.944590   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:37.944597   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:37.944652   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:37.978066   67282 cri.go:89] found id: ""
	I1004 04:26:37.978093   67282 logs.go:282] 0 containers: []
	W1004 04:26:37.978101   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:37.978107   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:37.978163   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:38.011065   67282 cri.go:89] found id: ""
	I1004 04:26:38.011095   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.011104   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:38.011109   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:38.011156   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:38.050323   67282 cri.go:89] found id: ""
	I1004 04:26:38.050349   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.050359   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:38.050366   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:38.050425   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:38.089141   67282 cri.go:89] found id: ""
	I1004 04:26:38.089169   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.089177   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:38.089182   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:38.089258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:38.122625   67282 cri.go:89] found id: ""
	I1004 04:26:38.122653   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.122663   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:38.122671   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:38.122719   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:38.159957   67282 cri.go:89] found id: ""
	I1004 04:26:38.159982   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.159990   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:38.159996   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:38.160085   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:38.194592   67282 cri.go:89] found id: ""
	I1004 04:26:38.194618   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.194626   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:38.194646   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:38.194657   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:38.263914   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:38.263945   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:38.263958   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:38.339864   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:38.339895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:38.375477   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:38.375505   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:38.428292   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:38.428320   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:36.050815   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:38.548602   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:39.646794   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.146914   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:40.123280   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.623659   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:40.941910   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:40.955041   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:40.955117   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:40.991278   67282 cri.go:89] found id: ""
	I1004 04:26:40.991307   67282 logs.go:282] 0 containers: []
	W1004 04:26:40.991317   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:40.991325   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:40.991389   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:41.025347   67282 cri.go:89] found id: ""
	I1004 04:26:41.025373   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.025385   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:41.025392   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:41.025450   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:41.060974   67282 cri.go:89] found id: ""
	I1004 04:26:41.061001   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.061019   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:41.061026   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:41.061087   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:41.097557   67282 cri.go:89] found id: ""
	I1004 04:26:41.097587   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.097598   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:41.097605   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:41.097665   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:41.136371   67282 cri.go:89] found id: ""
	I1004 04:26:41.136396   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.136405   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:41.136412   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:41.136472   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:41.172590   67282 cri.go:89] found id: ""
	I1004 04:26:41.172617   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.172627   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:41.172634   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:41.172687   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:41.209124   67282 cri.go:89] found id: ""
	I1004 04:26:41.209146   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.209154   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:41.209159   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:41.209214   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:41.250654   67282 cri.go:89] found id: ""
	I1004 04:26:41.250687   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.250699   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:41.250709   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:41.250723   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:41.305814   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:41.305864   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:41.322961   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:41.322989   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:41.427611   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:41.427632   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:41.427648   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:41.505830   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:41.505877   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:40.549691   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.549838   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:44.647149   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.146894   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:45.122344   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.122706   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:44.050902   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:44.065277   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:44.065343   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:44.101089   67282 cri.go:89] found id: ""
	I1004 04:26:44.101110   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.101117   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:44.101123   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:44.101174   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:44.138570   67282 cri.go:89] found id: ""
	I1004 04:26:44.138593   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.138601   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:44.138606   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:44.138650   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:44.178423   67282 cri.go:89] found id: ""
	I1004 04:26:44.178456   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.178478   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:44.178486   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:44.178556   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:44.213301   67282 cri.go:89] found id: ""
	I1004 04:26:44.213330   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.213338   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:44.213344   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:44.213401   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:44.247653   67282 cri.go:89] found id: ""
	I1004 04:26:44.247681   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.247688   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:44.247694   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:44.247756   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:44.281667   67282 cri.go:89] found id: ""
	I1004 04:26:44.281693   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.281704   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:44.281711   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:44.281767   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:44.314637   67282 cri.go:89] found id: ""
	I1004 04:26:44.314667   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.314677   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:44.314684   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:44.314760   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:44.349432   67282 cri.go:89] found id: ""
	I1004 04:26:44.349459   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.349469   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:44.349479   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:44.349492   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:44.397134   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:44.397168   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:44.410708   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:44.410738   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:44.482025   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:44.482049   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:44.482065   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:44.562652   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:44.562699   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:47.101459   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:47.116923   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:47.117020   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:47.153495   67282 cri.go:89] found id: ""
	I1004 04:26:47.153524   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.153534   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:47.153541   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:47.153601   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:47.189976   67282 cri.go:89] found id: ""
	I1004 04:26:47.190004   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.190014   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:47.190023   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:47.190084   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:47.225712   67282 cri.go:89] found id: ""
	I1004 04:26:47.225740   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.225748   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:47.225754   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:47.225800   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:47.261565   67282 cri.go:89] found id: ""
	I1004 04:26:47.261593   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.261603   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:47.261608   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:47.261665   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:47.298152   67282 cri.go:89] found id: ""
	I1004 04:26:47.298204   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.298214   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:47.298223   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:47.298279   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:47.338226   67282 cri.go:89] found id: ""
	I1004 04:26:47.338253   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.338261   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:47.338267   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:47.338320   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:47.378859   67282 cri.go:89] found id: ""
	I1004 04:26:47.378892   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.378902   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:47.378909   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:47.378964   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:47.418161   67282 cri.go:89] found id: ""
	I1004 04:26:47.418186   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.418194   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:47.418203   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:47.418213   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:47.470271   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:47.470311   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:47.484416   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:47.484453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:47.556744   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:47.556767   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:47.556778   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:47.634266   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:47.634299   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:45.050501   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.550072   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:49.147562   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:51.648504   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:49.623375   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:52.122346   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:50.175746   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:50.191850   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:50.191945   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:50.229542   67282 cri.go:89] found id: ""
	I1004 04:26:50.229574   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.229584   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:50.229593   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:50.229655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:50.268401   67282 cri.go:89] found id: ""
	I1004 04:26:50.268432   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.268441   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:50.268449   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:50.268522   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:50.302927   67282 cri.go:89] found id: ""
	I1004 04:26:50.302954   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.302964   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:50.302969   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:50.303029   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:50.336617   67282 cri.go:89] found id: ""
	I1004 04:26:50.336646   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.336656   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:50.336663   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:50.336724   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:50.372871   67282 cri.go:89] found id: ""
	I1004 04:26:50.372901   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.372911   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:50.372918   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:50.372977   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:50.409601   67282 cri.go:89] found id: ""
	I1004 04:26:50.409629   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.409640   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:50.409648   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:50.409723   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:50.451899   67282 cri.go:89] found id: ""
	I1004 04:26:50.451927   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.451935   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:50.451940   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:50.451991   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:50.487306   67282 cri.go:89] found id: ""
	I1004 04:26:50.487332   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.487343   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:50.487353   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:50.487369   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:50.565167   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:50.565192   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:50.565207   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:50.646155   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:50.646194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:50.688459   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:50.688489   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:50.742416   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:50.742460   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:53.257063   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:53.270546   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:53.270618   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:53.306504   67282 cri.go:89] found id: ""
	I1004 04:26:53.306530   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.306538   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:53.306544   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:53.306594   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:53.343256   67282 cri.go:89] found id: ""
	I1004 04:26:53.343285   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.343293   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:53.343299   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:53.343352   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:53.380834   67282 cri.go:89] found id: ""
	I1004 04:26:53.380864   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.380873   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:53.380880   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:53.380940   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:53.417361   67282 cri.go:89] found id: ""
	I1004 04:26:53.417391   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.417404   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:53.417415   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:53.417479   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:53.451948   67282 cri.go:89] found id: ""
	I1004 04:26:53.451970   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.451978   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:53.451983   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:53.452039   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:53.487731   67282 cri.go:89] found id: ""
	I1004 04:26:53.487756   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.487764   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:53.487769   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:53.487836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:50.049952   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:52.050275   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:54.151420   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.647593   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:54.122386   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.623398   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:53.531549   67282 cri.go:89] found id: ""
	I1004 04:26:53.531573   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.531582   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:53.531587   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:53.531643   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:53.578123   67282 cri.go:89] found id: ""
	I1004 04:26:53.578151   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.578162   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:53.578180   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:53.578195   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:53.643062   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:53.643093   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:53.696157   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:53.696194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:53.709884   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:53.709910   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:53.791272   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:53.791297   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:53.791314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:56.371608   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:56.386293   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:56.386376   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:56.425531   67282 cri.go:89] found id: ""
	I1004 04:26:56.425560   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.425571   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:56.425578   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:56.425646   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:56.470293   67282 cri.go:89] found id: ""
	I1004 04:26:56.470326   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.470335   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:56.470340   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:56.470400   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:56.508927   67282 cri.go:89] found id: ""
	I1004 04:26:56.508955   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.508963   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:56.508968   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:56.509018   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:56.549149   67282 cri.go:89] found id: ""
	I1004 04:26:56.549178   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.549191   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:56.549199   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:56.549270   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:56.589412   67282 cri.go:89] found id: ""
	I1004 04:26:56.589441   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.589451   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:56.589459   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:56.589517   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:56.624732   67282 cri.go:89] found id: ""
	I1004 04:26:56.624760   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.624770   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:56.624776   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:56.624838   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:56.662385   67282 cri.go:89] found id: ""
	I1004 04:26:56.662413   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.662421   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:56.662427   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:56.662483   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:56.697982   67282 cri.go:89] found id: ""
	I1004 04:26:56.698014   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.698025   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:56.698036   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:56.698049   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:56.750597   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:56.750633   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:56.764884   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:56.764921   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:56.844404   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:56.844433   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:56.844451   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:56.924373   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:56.924406   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:54.548706   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.549763   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.049294   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:58.648470   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:01.146948   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.148357   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.123321   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:01.622391   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.466449   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:59.481897   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:59.481972   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:59.535384   67282 cri.go:89] found id: ""
	I1004 04:26:59.535411   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.535422   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:59.535428   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:59.535486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:59.595843   67282 cri.go:89] found id: ""
	I1004 04:26:59.595875   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.595886   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:59.595894   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:59.595954   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:59.641010   67282 cri.go:89] found id: ""
	I1004 04:26:59.641041   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.641049   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:59.641057   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:59.641102   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:59.679705   67282 cri.go:89] found id: ""
	I1004 04:26:59.679736   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.679746   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:59.679753   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:59.679828   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:59.715960   67282 cri.go:89] found id: ""
	I1004 04:26:59.715985   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.715993   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:59.715998   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:59.716047   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:59.757406   67282 cri.go:89] found id: ""
	I1004 04:26:59.757442   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.757453   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:59.757461   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:59.757528   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:59.792038   67282 cri.go:89] found id: ""
	I1004 04:26:59.792066   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.792076   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:59.792083   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:59.792141   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:59.830258   67282 cri.go:89] found id: ""
	I1004 04:26:59.830281   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.830289   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:59.830296   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:59.830308   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:59.877273   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:59.877304   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:59.932570   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:59.932610   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:59.945896   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:59.945919   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:00.020363   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:00.020392   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:00.020412   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:02.601022   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:02.615039   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:02.615112   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:02.654541   67282 cri.go:89] found id: ""
	I1004 04:27:02.654567   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.654574   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:02.654579   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:02.654638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:02.691313   67282 cri.go:89] found id: ""
	I1004 04:27:02.691338   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.691349   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:02.691355   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:02.691414   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:02.735337   67282 cri.go:89] found id: ""
	I1004 04:27:02.735367   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.735376   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:02.735383   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:02.735486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:02.769604   67282 cri.go:89] found id: ""
	I1004 04:27:02.769628   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.769638   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:02.769643   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:02.769704   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:02.812913   67282 cri.go:89] found id: ""
	I1004 04:27:02.812938   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.812949   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:02.812954   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:02.813020   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:02.849910   67282 cri.go:89] found id: ""
	I1004 04:27:02.849939   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.849949   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:02.849956   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:02.850023   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:02.889467   67282 cri.go:89] found id: ""
	I1004 04:27:02.889497   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.889509   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:02.889517   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:02.889575   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:02.928508   67282 cri.go:89] found id: ""
	I1004 04:27:02.928529   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.928537   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:02.928545   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:02.928556   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:02.942783   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:02.942821   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:03.018282   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:03.018304   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:03.018314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:03.101588   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:03.101622   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:03.149911   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:03.149937   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
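Each retry of the loop ends with the same diagnostics sweep: the last 400 lines of the kubelet and CRI-O journals, recent kernel warnings from dmesg, a "describe nodes" attempt (which keeps failing while the API server is down), and a container listing that falls back to docker when crictl is unavailable. Reproducing that sweep by hand looks roughly like this (commands copied from the Run: lines above; grouping them into one sequence is only for illustration):

    # Kubelet and CRI-O service logs, most recent 400 lines each.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400

    # Kernel warnings and errors (human-readable, no pager, no color), last 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

    # Node description via the bundled kubectl; fails with "connection refused"
    # until an API server is actually serving on localhost:8443.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

    # All containers, preferring crictl and falling back to docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a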
	I1004 04:27:01.051581   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.550066   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.646200   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:07.648479   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.622932   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.623005   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.121151   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.703125   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:05.717243   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:05.717303   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:05.752564   67282 cri.go:89] found id: ""
	I1004 04:27:05.752588   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.752597   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:05.752609   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:05.752656   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:05.786955   67282 cri.go:89] found id: ""
	I1004 04:27:05.786983   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.786994   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:05.787001   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:05.787073   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:05.823848   67282 cri.go:89] found id: ""
	I1004 04:27:05.823882   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.823893   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:05.823901   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:05.823970   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:05.866192   67282 cri.go:89] found id: ""
	I1004 04:27:05.866220   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.866238   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:05.866246   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:05.866305   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:05.904051   67282 cri.go:89] found id: ""
	I1004 04:27:05.904078   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.904089   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:05.904096   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:05.904154   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:05.940041   67282 cri.go:89] found id: ""
	I1004 04:27:05.940075   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.940085   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:05.940092   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:05.940158   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:05.975758   67282 cri.go:89] found id: ""
	I1004 04:27:05.975799   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.975810   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:05.975818   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:05.975892   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:06.011044   67282 cri.go:89] found id: ""
	I1004 04:27:06.011086   67282 logs.go:282] 0 containers: []
	W1004 04:27:06.011096   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:06.011105   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:06.011116   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:06.024900   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:06.024937   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:06.109932   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:06.109960   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:06.109976   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:06.189517   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:06.189557   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:06.230019   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:06.230048   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:06.050004   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.548768   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:10.147814   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.646430   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:10.122097   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.123967   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.785355   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:08.799156   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:08.799218   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:08.843606   67282 cri.go:89] found id: ""
	I1004 04:27:08.843634   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.843643   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:08.843648   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:08.843698   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:08.884418   67282 cri.go:89] found id: ""
	I1004 04:27:08.884443   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.884450   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:08.884456   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:08.884503   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:08.925878   67282 cri.go:89] found id: ""
	I1004 04:27:08.925906   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.925914   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:08.925920   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:08.925970   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:08.966127   67282 cri.go:89] found id: ""
	I1004 04:27:08.966157   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.966167   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:08.966173   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:08.966227   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:09.010646   67282 cri.go:89] found id: ""
	I1004 04:27:09.010672   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.010682   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:09.010702   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:09.010769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:09.049738   67282 cri.go:89] found id: ""
	I1004 04:27:09.049761   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.049768   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:09.049774   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:09.049825   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:09.082709   67282 cri.go:89] found id: ""
	I1004 04:27:09.082739   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.082747   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:09.082752   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:09.082808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:09.120574   67282 cri.go:89] found id: ""
	I1004 04:27:09.120605   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.120617   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:09.120626   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:09.120636   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:09.202880   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:09.202922   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:09.242668   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:09.242700   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:09.298662   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:09.298703   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:09.314832   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:09.314868   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:09.389062   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:11.889645   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:11.902953   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:11.903012   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:11.939846   67282 cri.go:89] found id: ""
	I1004 04:27:11.939874   67282 logs.go:282] 0 containers: []
	W1004 04:27:11.939882   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:11.939888   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:11.939936   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:11.975281   67282 cri.go:89] found id: ""
	I1004 04:27:11.975303   67282 logs.go:282] 0 containers: []
	W1004 04:27:11.975311   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:11.975317   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:11.975370   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:12.011400   67282 cri.go:89] found id: ""
	I1004 04:27:12.011428   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.011438   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:12.011443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:12.011506   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:12.046862   67282 cri.go:89] found id: ""
	I1004 04:27:12.046889   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.046898   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:12.046905   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:12.046960   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:12.081537   67282 cri.go:89] found id: ""
	I1004 04:27:12.081569   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.081581   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:12.081590   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:12.081655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:12.121982   67282 cri.go:89] found id: ""
	I1004 04:27:12.122010   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.122021   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:12.122028   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:12.122086   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:12.161419   67282 cri.go:89] found id: ""
	I1004 04:27:12.161460   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.161473   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:12.161481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:12.161549   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:12.202188   67282 cri.go:89] found id: ""
	I1004 04:27:12.202230   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.202242   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:12.202253   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:12.202267   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:12.253424   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:12.253462   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:12.268116   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:12.268141   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:12.337788   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:12.337814   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:12.337826   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:12.417359   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:12.417395   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:10.549097   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.549239   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.647267   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:17.147526   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.623050   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:16.623702   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.959596   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:14.973031   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:14.973090   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:15.011451   67282 cri.go:89] found id: ""
	I1004 04:27:15.011487   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.011497   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:15.011513   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:15.011572   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:15.055767   67282 cri.go:89] found id: ""
	I1004 04:27:15.055817   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.055829   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:15.055836   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:15.055915   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:15.096357   67282 cri.go:89] found id: ""
	I1004 04:27:15.096385   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.096394   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:15.096399   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:15.096456   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:15.131824   67282 cri.go:89] found id: ""
	I1004 04:27:15.131853   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.131863   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:15.131870   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:15.131932   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:15.169250   67282 cri.go:89] found id: ""
	I1004 04:27:15.169285   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.169299   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:15.169307   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:15.169373   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:15.206852   67282 cri.go:89] found id: ""
	I1004 04:27:15.206881   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.206889   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:15.206895   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:15.206949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:15.241392   67282 cri.go:89] found id: ""
	I1004 04:27:15.241421   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.241431   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:15.241439   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:15.241498   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:15.280697   67282 cri.go:89] found id: ""
	I1004 04:27:15.280723   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.280734   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:15.280744   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:15.280758   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:15.361681   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:15.361716   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:15.404640   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:15.404676   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:15.457287   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:15.457326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:15.471162   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:15.471188   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:15.544157   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:18.045094   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:18.060228   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:18.060310   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:18.096659   67282 cri.go:89] found id: ""
	I1004 04:27:18.096688   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.096697   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:18.096703   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:18.096757   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:18.135538   67282 cri.go:89] found id: ""
	I1004 04:27:18.135565   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.135573   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:18.135579   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:18.135629   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:18.171051   67282 cri.go:89] found id: ""
	I1004 04:27:18.171082   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.171098   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:18.171106   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:18.171168   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:18.205696   67282 cri.go:89] found id: ""
	I1004 04:27:18.205725   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.205735   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:18.205742   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:18.205803   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:18.240545   67282 cri.go:89] found id: ""
	I1004 04:27:18.240566   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.240576   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:18.240584   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:18.240638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:18.279185   67282 cri.go:89] found id: ""
	I1004 04:27:18.279221   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.279232   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:18.279239   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:18.279310   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:18.318395   67282 cri.go:89] found id: ""
	I1004 04:27:18.318417   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.318424   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:18.318430   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:18.318476   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:18.352367   67282 cri.go:89] found id: ""
	I1004 04:27:18.352390   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.352398   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:18.352407   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:18.352420   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:18.365604   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:18.365637   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:18.438407   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:18.438427   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:18.438438   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:14.549690   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:16.550244   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:18.550355   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:19.647031   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:22.147826   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:19.126090   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:21.623910   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:18.513645   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:18.513679   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:18.557224   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:18.557250   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:21.111005   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:21.126573   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:21.126631   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:21.161161   67282 cri.go:89] found id: ""
	I1004 04:27:21.161190   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.161201   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:21.161207   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:21.161258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:21.199517   67282 cri.go:89] found id: ""
	I1004 04:27:21.199544   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.199555   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:21.199562   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:21.199625   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:21.236210   67282 cri.go:89] found id: ""
	I1004 04:27:21.236238   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.236246   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:21.236251   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:21.236311   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:21.272720   67282 cri.go:89] found id: ""
	I1004 04:27:21.272746   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.272753   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:21.272759   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:21.272808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:21.311439   67282 cri.go:89] found id: ""
	I1004 04:27:21.311474   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.311484   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:21.311491   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:21.311551   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:21.360400   67282 cri.go:89] found id: ""
	I1004 04:27:21.360427   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.360436   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:21.360443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:21.360511   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:21.394627   67282 cri.go:89] found id: ""
	I1004 04:27:21.394656   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.394667   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:21.394673   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:21.394721   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:21.429736   67282 cri.go:89] found id: ""
	I1004 04:27:21.429762   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.429770   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:21.429778   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:21.429789   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:21.482773   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:21.482808   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:21.497570   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:21.497595   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:21.582335   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:21.582355   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:21.582367   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:21.662196   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:21.662230   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:21.050000   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:23.050516   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.647074   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:27.147999   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.123142   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:26.624049   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.205743   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:24.222878   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:24.222951   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:24.263410   67282 cri.go:89] found id: ""
	I1004 04:27:24.263450   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.263462   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:24.263469   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:24.263532   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:24.306892   67282 cri.go:89] found id: ""
	I1004 04:27:24.306923   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.306934   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:24.306941   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:24.307008   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:24.345522   67282 cri.go:89] found id: ""
	I1004 04:27:24.345559   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.345571   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:24.345579   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:24.345638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:24.384893   67282 cri.go:89] found id: ""
	I1004 04:27:24.384918   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.384925   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:24.384931   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:24.384978   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:24.420998   67282 cri.go:89] found id: ""
	I1004 04:27:24.421025   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.421036   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:24.421043   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:24.421105   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:24.456277   67282 cri.go:89] found id: ""
	I1004 04:27:24.456305   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.456315   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:24.456322   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:24.456383   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:24.497852   67282 cri.go:89] found id: ""
	I1004 04:27:24.497881   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.497892   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:24.497900   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:24.497960   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:24.538702   67282 cri.go:89] found id: ""
	I1004 04:27:24.538736   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.538755   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:24.538766   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:24.538778   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:24.553747   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:24.553773   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:24.638059   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:24.638081   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:24.638093   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:24.718165   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:24.718212   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:24.759770   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:24.759811   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:27.311684   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:27.327493   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:27.327570   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:27.362804   67282 cri.go:89] found id: ""
	I1004 04:27:27.362827   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.362836   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:27.362841   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:27.362888   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:27.401576   67282 cri.go:89] found id: ""
	I1004 04:27:27.401604   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.401614   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:27.401621   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:27.401682   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:27.445152   67282 cri.go:89] found id: ""
	I1004 04:27:27.445177   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.445187   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:27.445193   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:27.445240   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:27.482710   67282 cri.go:89] found id: ""
	I1004 04:27:27.482734   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.482742   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:27.482749   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:27.482808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:27.519459   67282 cri.go:89] found id: ""
	I1004 04:27:27.519488   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.519498   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:27.519505   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:27.519569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:27.559381   67282 cri.go:89] found id: ""
	I1004 04:27:27.559407   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.559417   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:27.559423   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:27.559468   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:27.609040   67282 cri.go:89] found id: ""
	I1004 04:27:27.609068   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.609076   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:27.609081   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:27.609128   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:27.654537   67282 cri.go:89] found id: ""
	I1004 04:27:27.654569   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.654579   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:27.654590   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:27.654603   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:27.709062   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:27.709098   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:27.722931   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:27.722955   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:27.796863   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:27.796884   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:27.796895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:27.879840   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:27.879876   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:25.549643   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:27.551373   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:29.646879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:31.646956   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:29.122087   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:31.122774   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:30.423644   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:30.439256   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:30.439311   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:30.479612   67282 cri.go:89] found id: ""
	I1004 04:27:30.479640   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.479648   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:30.479654   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:30.479750   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:30.522846   67282 cri.go:89] found id: ""
	I1004 04:27:30.522879   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.522890   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:30.522898   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:30.522946   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:30.558935   67282 cri.go:89] found id: ""
	I1004 04:27:30.558962   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.558971   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:30.558976   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:30.559032   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:30.603383   67282 cri.go:89] found id: ""
	I1004 04:27:30.603411   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.603421   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:30.603428   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:30.603492   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:30.644700   67282 cri.go:89] found id: ""
	I1004 04:27:30.644727   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.644737   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:30.644744   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:30.644799   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:30.680328   67282 cri.go:89] found id: ""
	I1004 04:27:30.680358   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.680367   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:30.680372   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:30.680419   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:30.717973   67282 cri.go:89] found id: ""
	I1004 04:27:30.717995   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.718005   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:30.718021   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:30.718082   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:30.755838   67282 cri.go:89] found id: ""
	I1004 04:27:30.755866   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.755874   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:30.755882   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:30.755893   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:30.809999   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:30.810036   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:30.824447   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:30.824491   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:30.902008   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:30.902030   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:30.902043   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:30.986938   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:30.986984   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:30.049983   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:32.050033   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:34.050671   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.647707   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:36.146619   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.624575   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:36.122046   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.531108   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:33.546681   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:33.546759   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:33.586444   67282 cri.go:89] found id: ""
	I1004 04:27:33.586469   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.586479   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:33.586486   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:33.586552   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:33.629340   67282 cri.go:89] found id: ""
	I1004 04:27:33.629365   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.629373   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:33.629378   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:33.629429   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:33.668446   67282 cri.go:89] found id: ""
	I1004 04:27:33.668473   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.668483   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:33.668490   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:33.668548   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:33.706287   67282 cri.go:89] found id: ""
	I1004 04:27:33.706312   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.706320   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:33.706327   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:33.706385   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:33.746161   67282 cri.go:89] found id: ""
	I1004 04:27:33.746189   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.746200   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:33.746207   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:33.746270   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:33.782157   67282 cri.go:89] found id: ""
	I1004 04:27:33.782184   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.782194   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:33.782200   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:33.782262   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:33.820332   67282 cri.go:89] found id: ""
	I1004 04:27:33.820361   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.820371   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:33.820378   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:33.820437   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:33.859431   67282 cri.go:89] found id: ""
	I1004 04:27:33.859458   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.859467   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:33.859475   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:33.859485   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:33.910259   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:33.910292   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:33.925149   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:33.925177   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:34.006153   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:34.006187   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:34.006202   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:34.115882   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:34.115916   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:36.662964   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:36.677071   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:36.677139   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:36.720785   67282 cri.go:89] found id: ""
	I1004 04:27:36.720807   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.720818   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:36.720826   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:36.720875   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:36.757535   67282 cri.go:89] found id: ""
	I1004 04:27:36.757563   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.757574   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:36.757582   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:36.757630   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:36.800989   67282 cri.go:89] found id: ""
	I1004 04:27:36.801024   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.801038   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:36.801046   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:36.801112   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:36.837101   67282 cri.go:89] found id: ""
	I1004 04:27:36.837122   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.837131   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:36.837136   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:36.837181   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:36.876325   67282 cri.go:89] found id: ""
	I1004 04:27:36.876358   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.876370   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:36.876379   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:36.876444   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:36.914720   67282 cri.go:89] found id: ""
	I1004 04:27:36.914749   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.914759   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:36.914767   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:36.914828   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:36.949672   67282 cri.go:89] found id: ""
	I1004 04:27:36.949694   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.949701   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:36.949706   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:36.949754   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:36.983374   67282 cri.go:89] found id: ""
	I1004 04:27:36.983406   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.983416   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:36.983427   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:36.983440   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:37.039040   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:37.039075   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:37.054873   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:37.054898   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:37.131537   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:37.131562   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:37.131577   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:37.213958   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:37.213990   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:36.548751   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:39.049804   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:38.646028   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:40.646213   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:42.648505   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:38.623560   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:40.623721   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:43.122033   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:39.754264   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:39.771465   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:39.771545   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:39.829530   67282 cri.go:89] found id: ""
	I1004 04:27:39.829560   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.829572   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:39.829580   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:39.829639   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:39.876055   67282 cri.go:89] found id: ""
	I1004 04:27:39.876078   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.876090   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:39.876095   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:39.876142   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:39.913304   67282 cri.go:89] found id: ""
	I1004 04:27:39.913327   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.913335   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:39.913340   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:39.913389   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:39.948821   67282 cri.go:89] found id: ""
	I1004 04:27:39.948847   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.948855   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:39.948862   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:39.948916   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:39.986994   67282 cri.go:89] found id: ""
	I1004 04:27:39.987023   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.987034   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:39.987041   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:39.987141   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:40.026627   67282 cri.go:89] found id: ""
	I1004 04:27:40.026656   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.026668   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:40.026675   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:40.026734   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:40.067028   67282 cri.go:89] found id: ""
	I1004 04:27:40.067068   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.067079   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:40.067086   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:40.067144   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:40.105638   67282 cri.go:89] found id: ""
	I1004 04:27:40.105667   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.105677   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:40.105694   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:40.105707   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:40.159425   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:40.159467   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:40.175045   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:40.175073   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:40.261967   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:40.261989   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:40.262002   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:40.345317   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:40.345354   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:42.888115   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:42.901889   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:42.901948   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:42.938556   67282 cri.go:89] found id: ""
	I1004 04:27:42.938587   67282 logs.go:282] 0 containers: []
	W1004 04:27:42.938597   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:42.938604   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:42.938668   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:42.974569   67282 cri.go:89] found id: ""
	I1004 04:27:42.974595   67282 logs.go:282] 0 containers: []
	W1004 04:27:42.974606   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:42.974613   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:42.974679   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:43.010552   67282 cri.go:89] found id: ""
	I1004 04:27:43.010581   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.010593   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:43.010600   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:43.010655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:43.046204   67282 cri.go:89] found id: ""
	I1004 04:27:43.046237   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.046247   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:43.046254   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:43.046313   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:43.081612   67282 cri.go:89] found id: ""
	I1004 04:27:43.081644   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.081655   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:43.081662   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:43.081729   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:43.121103   67282 cri.go:89] found id: ""
	I1004 04:27:43.121126   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.121133   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:43.121139   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:43.121191   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:43.157104   67282 cri.go:89] found id: ""
	I1004 04:27:43.157128   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.157136   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:43.157141   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:43.157196   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:43.198927   67282 cri.go:89] found id: ""
	I1004 04:27:43.198951   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.198958   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:43.198966   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:43.198975   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:43.254534   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:43.254563   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:43.268106   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:43.268130   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:43.344382   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:43.344410   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:43.344425   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:43.426916   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:43.426948   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:41.549364   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:43.549590   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.146452   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:47.148300   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.126135   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:47.622568   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.966806   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:45.980187   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:45.980252   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:46.014196   67282 cri.go:89] found id: ""
	I1004 04:27:46.014220   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.014228   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:46.014233   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:46.014295   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:46.053910   67282 cri.go:89] found id: ""
	I1004 04:27:46.053940   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.053951   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:46.053957   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:46.054013   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:46.087896   67282 cri.go:89] found id: ""
	I1004 04:27:46.087921   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.087930   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:46.087936   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:46.087985   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:46.123441   67282 cri.go:89] found id: ""
	I1004 04:27:46.123465   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.123475   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:46.123481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:46.123545   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:46.159664   67282 cri.go:89] found id: ""
	I1004 04:27:46.159688   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.159698   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:46.159704   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:46.159761   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:46.195474   67282 cri.go:89] found id: ""
	I1004 04:27:46.195501   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.195512   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:46.195525   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:46.195569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:46.228670   67282 cri.go:89] found id: ""
	I1004 04:27:46.228693   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.228701   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:46.228706   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:46.228759   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:46.265278   67282 cri.go:89] found id: ""
	I1004 04:27:46.265303   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.265311   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:46.265325   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:46.265338   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:46.315135   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:46.315163   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:46.327765   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:46.327797   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:46.393157   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:46.393173   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:46.393184   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:46.473026   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:46.473058   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:46.049285   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:48.549053   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:49.647027   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:52.146841   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:50.122921   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:52.622913   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:49.011972   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:49.025718   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:49.025783   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:49.062749   67282 cri.go:89] found id: ""
	I1004 04:27:49.062774   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.062782   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:49.062788   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:49.062844   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:49.100838   67282 cri.go:89] found id: ""
	I1004 04:27:49.100886   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.100897   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:49.100904   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:49.100961   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:49.139966   67282 cri.go:89] found id: ""
	I1004 04:27:49.139990   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.140000   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:49.140007   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:49.140088   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:49.179347   67282 cri.go:89] found id: ""
	I1004 04:27:49.179373   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.179384   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:49.179391   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:49.179435   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:49.218086   67282 cri.go:89] found id: ""
	I1004 04:27:49.218112   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.218121   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:49.218127   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:49.218181   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:49.254779   67282 cri.go:89] found id: ""
	I1004 04:27:49.254811   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.254823   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:49.254830   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:49.254888   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:49.287351   67282 cri.go:89] found id: ""
	I1004 04:27:49.287381   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.287392   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:49.287398   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:49.287456   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:49.320051   67282 cri.go:89] found id: ""
	I1004 04:27:49.320078   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.320089   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:49.320100   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:49.320112   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:49.371270   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:49.371300   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:49.384403   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:49.384432   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:49.468132   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:49.468154   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:49.468167   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:49.543179   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:49.543211   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:52.093235   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:52.108446   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:52.108520   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:52.147590   67282 cri.go:89] found id: ""
	I1004 04:27:52.147613   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.147620   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:52.147626   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:52.147677   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:52.183066   67282 cri.go:89] found id: ""
	I1004 04:27:52.183095   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.183105   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:52.183112   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:52.183170   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:52.223109   67282 cri.go:89] found id: ""
	I1004 04:27:52.223140   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.223154   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:52.223165   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:52.223223   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:52.259547   67282 cri.go:89] found id: ""
	I1004 04:27:52.259573   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.259582   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:52.259587   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:52.259638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:52.296934   67282 cri.go:89] found id: ""
	I1004 04:27:52.296961   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.296971   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:52.296979   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:52.297040   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:52.331650   67282 cri.go:89] found id: ""
	I1004 04:27:52.331671   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.331679   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:52.331684   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:52.331728   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:52.365111   67282 cri.go:89] found id: ""
	I1004 04:27:52.365139   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.365150   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:52.365157   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:52.365239   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:52.400974   67282 cri.go:89] found id: ""
	I1004 04:27:52.401010   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.401023   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:52.401035   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:52.401049   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:52.484732   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:52.484771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:52.523322   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:52.523348   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:52.576671   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:52.576702   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:52.590263   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:52.590291   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:52.666646   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:50.549475   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:53.049259   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:54.646262   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.153196   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:55.123174   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.123932   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:55.166856   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:55.181481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:55.181562   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:55.218023   67282 cri.go:89] found id: ""
	I1004 04:27:55.218048   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.218056   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:55.218063   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:55.218121   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:55.256439   67282 cri.go:89] found id: ""
	I1004 04:27:55.256464   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.256472   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:55.256477   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:55.256531   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:55.294563   67282 cri.go:89] found id: ""
	I1004 04:27:55.294588   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.294596   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:55.294601   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:55.294656   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:55.331266   67282 cri.go:89] found id: ""
	I1004 04:27:55.331290   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.331300   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:55.331306   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:55.331370   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:55.367286   67282 cri.go:89] found id: ""
	I1004 04:27:55.367314   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.367325   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:55.367332   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:55.367391   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:55.402031   67282 cri.go:89] found id: ""
	I1004 04:27:55.402054   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.402062   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:55.402068   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:55.402122   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:55.437737   67282 cri.go:89] found id: ""
	I1004 04:27:55.437764   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.437774   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:55.437780   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:55.437842   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:55.470654   67282 cri.go:89] found id: ""
	I1004 04:27:55.470692   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.470704   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:55.470713   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:55.470726   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:55.521364   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:55.521393   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:55.534691   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:55.534716   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:55.600902   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:55.600923   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:55.600933   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:55.678896   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:55.678940   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:58.220086   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:58.234049   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:58.234110   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:58.281112   67282 cri.go:89] found id: ""
	I1004 04:27:58.281135   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.281143   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:58.281148   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:58.281191   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:58.320549   67282 cri.go:89] found id: ""
	I1004 04:27:58.320575   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.320584   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:58.320589   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:58.320635   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:58.355139   67282 cri.go:89] found id: ""
	I1004 04:27:58.355166   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.355174   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:58.355179   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:58.355225   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:58.387809   67282 cri.go:89] found id: ""
	I1004 04:27:58.387836   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.387846   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:58.387851   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:58.387908   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:58.420264   67282 cri.go:89] found id: ""
	I1004 04:27:58.420287   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.420295   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:58.420300   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:58.420349   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:58.455409   67282 cri.go:89] found id: ""
	I1004 04:27:58.455431   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.455438   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:58.455443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:58.455487   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:58.488708   67282 cri.go:89] found id: ""
	I1004 04:27:58.488734   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.488742   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:58.488749   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:58.488797   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:55.051622   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.548584   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:59.646699   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:01.648277   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:59.623008   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:02.122303   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:58.522139   67282 cri.go:89] found id: ""
	I1004 04:27:58.522161   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.522169   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:58.522176   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:58.522187   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:58.604653   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:58.604683   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:58.645141   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:58.645169   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:58.699716   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:58.699748   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:58.713197   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:58.713228   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:58.781998   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
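The repeated "connection to the server localhost:8443 was refused" errors mean the kube-apiserver static pod never came up, which is also why every crictl probe above returns an empty id list. A minimal manual check from inside the node looks like the following (a sketch; <profile> is a placeholder for the minikube profile under test, which this excerpt does not name):

minikube ssh -p <profile>
sudo crictl ps -a --name kube-apiserver       # empty output: the apiserver was never started
sudo journalctl -u kubelet -n 100 --no-pager  # kubelet errors usually explain why
curl -k https://localhost:8443/healthz        # refused until the apiserver is listening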
	I1004 04:28:01.282429   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:01.297266   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:01.297343   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:01.330421   67282 cri.go:89] found id: ""
	I1004 04:28:01.330446   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.330454   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:28:01.330459   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:01.330514   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:01.366960   67282 cri.go:89] found id: ""
	I1004 04:28:01.366983   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.366992   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:28:01.366998   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:01.367067   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:01.400886   67282 cri.go:89] found id: ""
	I1004 04:28:01.400910   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.400920   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:28:01.400931   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:01.400987   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:01.435556   67282 cri.go:89] found id: ""
	I1004 04:28:01.435586   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.435594   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:28:01.435601   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:01.435649   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:01.475772   67282 cri.go:89] found id: ""
	I1004 04:28:01.475810   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.475820   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:28:01.475826   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:01.475884   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:01.512380   67282 cri.go:89] found id: ""
	I1004 04:28:01.512403   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.512411   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:28:01.512417   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:01.512465   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:01.550488   67282 cri.go:89] found id: ""
	I1004 04:28:01.550517   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.550528   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:01.550536   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:28:01.550595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:28:01.586216   67282 cri.go:89] found id: ""
	I1004 04:28:01.586249   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.586261   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:28:01.586271   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:01.586285   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:01.640819   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:01.640860   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:01.656990   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:01.657020   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:28:01.731326   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:01.731354   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:01.731368   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:01.810007   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:28:01.810044   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:59.548748   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:01.551035   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.043116   66755 pod_ready.go:82] duration metric: took 4m0.000354814s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" ...
	E1004 04:28:04.043143   66755 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" (will not retry!)
	I1004 04:28:04.043167   66755 pod_ready.go:39] duration metric: took 4m15.403862245s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:04.043219   66755 kubeadm.go:597] duration metric: took 4m23.226496183s to restartPrimaryControlPlane
	W1004 04:28:04.043288   66755 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1004 04:28:04.043316   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:28:04.146697   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:06.147038   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:08.147201   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.122463   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:06.622379   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.352648   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:04.366150   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:04.366227   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:04.403272   67282 cri.go:89] found id: ""
	I1004 04:28:04.403298   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.403308   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:28:04.403315   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:04.403371   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:04.439237   67282 cri.go:89] found id: ""
	I1004 04:28:04.439269   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.439280   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:28:04.439287   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:04.439345   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:04.475532   67282 cri.go:89] found id: ""
	I1004 04:28:04.475558   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.475569   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:28:04.475576   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:04.475638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:04.511738   67282 cri.go:89] found id: ""
	I1004 04:28:04.511765   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.511775   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:28:04.511792   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:04.511850   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:04.553536   67282 cri.go:89] found id: ""
	I1004 04:28:04.553561   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.553568   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:28:04.553574   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:04.553625   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:04.589016   67282 cri.go:89] found id: ""
	I1004 04:28:04.589044   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.589053   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:28:04.589058   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:04.589106   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:04.622780   67282 cri.go:89] found id: ""
	I1004 04:28:04.622808   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.622817   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:04.622823   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:28:04.622879   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:28:04.662620   67282 cri.go:89] found id: ""
	I1004 04:28:04.662641   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.662649   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:28:04.662659   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:04.662669   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:04.717894   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:04.717928   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:04.732353   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:04.732385   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:28:04.806443   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:04.806469   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:04.806492   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:04.887684   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:28:04.887717   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:07.426630   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:07.440242   67282 kubeadm.go:597] duration metric: took 4m3.475062199s to restartPrimaryControlPlane
	W1004 04:28:07.440318   67282 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1004 04:28:07.440346   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:28:08.147532   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:08.162175   67282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:28:08.172013   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:28:08.181741   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:28:08.181757   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:28:08.181801   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:28:08.191002   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:28:08.191046   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:28:08.200929   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:28:08.210241   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:28:08.210286   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:28:08.219693   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:28:08.229497   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:28:08.229534   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:28:08.239583   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:28:08.249207   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:28:08.249252   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
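The "config check failed" / grep / rm sequence above is minikube clearing stale kubeconfig files under /etc/kubernetes before re-running kubeadm init: a file is kept only if it already points at the expected control-plane endpoint. The same logic, written out as a loop (an editorial paraphrase of the logged commands, not minikube's source):

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
    || sudo rm -f "/etc/kubernetes/$f"
done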
	I1004 04:28:08.258516   67282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:28:08.328054   67282 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:28:08.328132   67282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:28:08.472265   67282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:28:08.472420   67282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:28:08.472543   67282 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 04:28:08.655873   67282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:28:08.657726   67282 out.go:235]   - Generating certificates and keys ...
	I1004 04:28:08.657817   67282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:28:08.657876   67282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:28:08.657942   67282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:28:08.658034   67282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:28:08.658149   67282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:28:08.658235   67282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:28:08.658309   67282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:28:08.658396   67282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:28:08.658503   67282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:28:08.658600   67282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:28:08.658651   67282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:28:08.658707   67282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:28:08.706486   67282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:28:08.909036   67282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:28:09.285968   67282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:28:09.499963   67282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:28:09.516914   67282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:28:09.517832   67282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:28:09.517900   67282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:28:09.664925   67282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:28:10.147391   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:12.646012   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:09.121686   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:11.123086   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:13.123578   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:09.666691   67282 out.go:235]   - Booting up control plane ...
	I1004 04:28:09.666889   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:28:09.671298   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:28:09.672046   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:28:09.672956   67282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:28:09.685069   67282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
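From this point kubeadm has only written static pod manifests; the kubelet is what actually starts the control plane, which is why the wait above is budgeted up to 4m0s. If that wait stalls, the manifests and the kubelet journal are the places to look (illustrative commands; the manifest names match the preflight list passed to kubeadm init above):

sudo ls /etc/kubernetes/manifests   # etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml
sudo journalctl -u kubelet -f       # watch the kubelet try to start them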
	I1004 04:28:14.646614   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:16.646683   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:15.125374   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:17.125685   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:18.646777   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:21.147299   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:19.623872   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:22.123077   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:23.646460   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:25.647096   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:28.147324   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:24.623730   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:27.123516   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:30.379460   66755 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.336110507s)
	I1004 04:28:30.379544   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:30.395622   66755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:28:30.406790   66755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:28:30.417380   66755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:28:30.417408   66755 kubeadm.go:157] found existing configuration files:
	
	I1004 04:28:30.417458   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:28:30.427925   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:28:30.427993   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:28:30.438694   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:28:30.448898   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:28:30.448972   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:28:30.459463   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:28:30.469227   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:28:30.469281   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:28:30.479979   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:28:30.489873   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:28:30.489936   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:28:30.499999   66755 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:28:30.549707   66755 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 04:28:30.549771   66755 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:28:30.663468   66755 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:28:30.663595   66755 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:28:30.663698   66755 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 04:28:30.675750   66755 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:28:30.677655   66755 out.go:235]   - Generating certificates and keys ...
	I1004 04:28:30.677760   66755 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:28:30.677868   66755 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:28:30.678010   66755 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:28:30.678102   66755 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:28:30.678217   66755 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:28:30.678289   66755 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:28:30.678378   66755 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:28:30.678470   66755 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:28:30.678566   66755 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:28:30.678732   66755 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:28:30.679295   66755 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:28:30.679383   66755 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:28:30.826979   66755 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:28:30.900919   66755 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 04:28:31.098221   66755 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:28:31.243668   66755 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:28:31.411766   66755 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:28:31.412181   66755 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:28:31.414652   66755 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:28:30.646927   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:32.647767   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:29.129148   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:31.623284   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:31.416504   66755 out.go:235]   - Booting up control plane ...
	I1004 04:28:31.416620   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:28:31.416730   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:28:31.418284   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:28:31.437379   66755 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:28:31.443450   66755 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:28:31.443505   66755 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:28:31.586540   66755 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 04:28:31.586706   66755 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 04:28:32.088382   66755 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.195244ms
	I1004 04:28:32.088510   66755 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 04:28:37.090291   66755 kubeadm.go:310] [api-check] The API server is healthy after 5.001756025s
	I1004 04:28:37.103845   66755 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 04:28:37.127230   66755 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 04:28:37.156917   66755 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 04:28:37.157181   66755 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-934812 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 04:28:37.171399   66755 kubeadm.go:310] [bootstrap-token] Using token: 1wt5ey.lvccf2aeyngf9mt3
	I1004 04:28:34.648249   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:37.148680   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:33.623901   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:36.122762   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:38.123147   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:37.172939   66755 out.go:235]   - Configuring RBAC rules ...
	I1004 04:28:37.173086   66755 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 04:28:37.179454   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 04:28:37.188765   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 04:28:37.192599   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 04:28:37.200359   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 04:28:37.204872   66755 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 04:28:37.498753   66755 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 04:28:37.931621   66755 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 04:28:38.497855   66755 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 04:28:38.498949   66755 kubeadm.go:310] 
	I1004 04:28:38.499023   66755 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 04:28:38.499055   66755 kubeadm.go:310] 
	I1004 04:28:38.499183   66755 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 04:28:38.499195   66755 kubeadm.go:310] 
	I1004 04:28:38.499229   66755 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 04:28:38.499316   66755 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 04:28:38.499385   66755 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 04:28:38.499393   66755 kubeadm.go:310] 
	I1004 04:28:38.499481   66755 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 04:28:38.499498   66755 kubeadm.go:310] 
	I1004 04:28:38.499563   66755 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 04:28:38.499571   66755 kubeadm.go:310] 
	I1004 04:28:38.499653   66755 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 04:28:38.499742   66755 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 04:28:38.499871   66755 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 04:28:38.499888   66755 kubeadm.go:310] 
	I1004 04:28:38.499994   66755 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 04:28:38.500104   66755 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 04:28:38.500115   66755 kubeadm.go:310] 
	I1004 04:28:38.500220   66755 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1wt5ey.lvccf2aeyngf9mt3 \
	I1004 04:28:38.500350   66755 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 \
	I1004 04:28:38.500387   66755 kubeadm.go:310] 	--control-plane 
	I1004 04:28:38.500402   66755 kubeadm.go:310] 
	I1004 04:28:38.500478   66755 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 04:28:38.500484   66755 kubeadm.go:310] 
	I1004 04:28:38.500563   66755 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1wt5ey.lvccf2aeyngf9mt3 \
	I1004 04:28:38.500686   66755 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 
	I1004 04:28:38.501820   66755 kubeadm.go:310] W1004 04:28:30.522396    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 04:28:38.502147   66755 kubeadm.go:310] W1004 04:28:30.524006    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 04:28:38.502282   66755 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
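Both warnings are actionable as printed: the generated kubeadm.yaml still uses the deprecated v1beta3 API, and the kubelet unit is started but not enabled. The fixes kubeadm itself suggests (the yaml paths are the placeholders from the warning text, not real files in this run):

kubeadm config migrate --old-config old.yaml --new-config new.yaml
sudo systemctl enable kubelet.service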
	I1004 04:28:38.502311   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:28:38.502321   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:28:38.504185   66755 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:28:38.505600   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:28:38.518746   66755 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
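The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration minikube chose above; its exact contents are not shown in the log. The snippet below is only an illustrative bridge conflist in the same spirit (every field value, including the subnet, is an assumption rather than the file minikube actually wrote):

sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    }
  ]
}
EOF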
	I1004 04:28:38.541311   66755 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:28:38.541422   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:38.541460   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-934812 minikube.k8s.io/updated_at=2024_10_04T04_28_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=embed-certs-934812 minikube.k8s.io/primary=true
	I1004 04:28:38.605537   66755 ops.go:34] apiserver oom_adj: -16
	I1004 04:28:38.765084   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:39.646916   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:41.651456   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:39.265365   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:39.765925   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:40.265135   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:40.766204   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:41.265734   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:41.765404   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:42.265993   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:42.765826   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:43.265776   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:43.353243   66755 kubeadm.go:1113] duration metric: took 4.811892444s to wait for elevateKubeSystemPrivileges
	I1004 04:28:43.353288   66755 kubeadm.go:394] duration metric: took 5m2.586827656s to StartCluster
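The burst of `kubectl get sa default` calls above, issued roughly every half second, is minikube waiting for the controller manager to create the `default` ServiceAccount before it finishes elevateKubeSystemPrivileges and moves on to addons. Done by hand it would look like this (a sketch built from the logged command):

until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done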
	I1004 04:28:43.353313   66755 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:28:43.353402   66755 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:28:43.355058   66755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:28:43.355309   66755 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:28:43.355388   66755 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:28:43.355533   66755 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-934812"
	I1004 04:28:43.355542   66755 addons.go:69] Setting default-storageclass=true in profile "embed-certs-934812"
	I1004 04:28:43.355556   66755 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-934812"
	I1004 04:28:43.355563   66755 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-934812"
	W1004 04:28:43.355568   66755 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:28:43.355584   66755 config.go:182] Loaded profile config "embed-certs-934812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:28:43.355598   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.355639   66755 addons.go:69] Setting metrics-server=true in profile "embed-certs-934812"
	I1004 04:28:43.355658   66755 addons.go:234] Setting addon metrics-server=true in "embed-certs-934812"
	W1004 04:28:43.355666   66755 addons.go:243] addon metrics-server should already be in state true
	I1004 04:28:43.355694   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.356024   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356075   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356095   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.356108   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.356075   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356173   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.357087   66755 out.go:177] * Verifying Kubernetes components...
	I1004 04:28:43.358428   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:28:43.373646   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45715
	I1004 04:28:43.373874   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I1004 04:28:43.374344   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.374344   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.374927   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.374948   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.375003   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.375027   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.375285   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.375342   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.375499   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.375884   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.375928   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.376269   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43725
	I1004 04:28:43.376636   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.377073   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.377099   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.377455   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.377883   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.377918   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.378402   66755 addons.go:234] Setting addon default-storageclass=true in "embed-certs-934812"
	W1004 04:28:43.378420   66755 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:28:43.378447   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.378705   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.378734   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.394001   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
	I1004 04:28:43.394289   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I1004 04:28:43.394645   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.394760   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.395195   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.395213   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.395302   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.395317   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.395596   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.395626   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.395842   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.396120   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.396160   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.397590   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.399391   66755 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:28:43.400581   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:28:43.400598   66755 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:28:43.400619   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.405134   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.405778   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I1004 04:28:43.405968   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.405996   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.406230   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.406383   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.406428   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.406571   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.406698   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.406825   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.406847   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.407455   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.407600   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.409278   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.411006   66755 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:28:40.622426   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:42.623400   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:43.412106   66755 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:28:43.412124   66755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:28:43.412389   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.414167   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
	I1004 04:28:43.414796   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.415285   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.415309   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.415657   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.415710   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.415911   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.416195   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.416217   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.416440   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.416628   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.416759   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.416856   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.418235   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.418426   66755 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:28:43.418436   66755 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:28:43.418456   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.421305   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.421761   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.421779   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.421966   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.422654   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.422789   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.422877   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.580648   66755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:28:43.615728   66755 node_ready.go:35] waiting up to 6m0s for node "embed-certs-934812" to be "Ready" ...
	I1004 04:28:43.625558   66755 node_ready.go:49] node "embed-certs-934812" has status "Ready":"True"
	I1004 04:28:43.625600   66755 node_ready.go:38] duration metric: took 9.827384ms for node "embed-certs-934812" to be "Ready" ...
	I1004 04:28:43.625612   66755 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:43.634425   66755 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:43.748926   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:28:43.774727   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:28:43.781558   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:28:43.781589   66755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:28:43.838039   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:28:43.838067   66755 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:28:43.945364   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:28:43.945392   66755 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:28:44.005000   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:28:44.253491   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.253521   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.253828   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.253896   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.253910   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.253925   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.253938   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.254130   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.254149   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.254164   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.267367   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.267396   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.267680   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.267700   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.864663   66755 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089890385s)
	I1004 04:28:44.864722   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.864734   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.865046   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.865070   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.865086   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.865095   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.866872   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.866877   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.866907   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.138868   66755 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133828074s)
	I1004 04:28:45.138926   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:45.138942   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:45.139243   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:45.139265   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.139276   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:45.139283   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:45.139484   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:45.139497   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.139507   66755 addons.go:475] Verifying addon metrics-server=true in "embed-certs-934812"
	I1004 04:28:45.141046   66755 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1004 04:28:44.147013   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:44.648117   67541 pod_ready.go:82] duration metric: took 4m0.007930603s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	E1004 04:28:44.648144   67541 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 04:28:44.648154   67541 pod_ready.go:39] duration metric: took 4m7.419382357s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:44.648170   67541 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:28:44.648200   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:44.648256   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:44.712473   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:44.712500   67541 cri.go:89] found id: ""
	I1004 04:28:44.712510   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:44.712568   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.717619   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:44.717688   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:44.760036   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:44.760061   67541 cri.go:89] found id: ""
	I1004 04:28:44.760071   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:44.760124   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.766402   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:44.766465   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:44.821766   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:44.821792   67541 cri.go:89] found id: ""
	I1004 04:28:44.821801   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:44.821858   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.826315   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:44.826370   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:44.873526   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:44.873547   67541 cri.go:89] found id: ""
	I1004 04:28:44.873556   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:44.873615   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.878375   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:44.878442   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:44.920240   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:44.920261   67541 cri.go:89] found id: ""
	I1004 04:28:44.920270   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:44.920322   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.925102   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:44.925158   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:44.967386   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:44.967406   67541 cri.go:89] found id: ""
	I1004 04:28:44.967416   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:44.967471   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.971979   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:44.972056   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:45.009842   67541 cri.go:89] found id: ""
	I1004 04:28:45.009869   67541 logs.go:282] 0 containers: []
	W1004 04:28:45.009881   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:45.009890   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:45.009952   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:45.055166   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:45.055189   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:45.055194   67541 cri.go:89] found id: ""
	I1004 04:28:45.055201   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:45.055258   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:45.060362   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:45.066118   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:45.066351   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:45.128185   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:45.128221   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:45.270042   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:45.270084   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:45.309065   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:45.309093   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:45.352299   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:45.352327   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:45.401846   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:45.401882   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:45.447474   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:45.447530   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:45.500734   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:45.500765   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:46.040224   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:46.040275   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:46.112675   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:46.112716   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:46.128530   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:46.128553   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:46.175007   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:46.175039   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:46.222706   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:46.222738   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:44.623804   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:47.122548   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:45.142166   66755 addons.go:510] duration metric: took 1.786788452s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1004 04:28:45.642731   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:46.641705   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.641730   66755 pod_ready.go:82] duration metric: took 3.007270041s for pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.641743   66755 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.646744   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.646767   66755 pod_ready.go:82] duration metric: took 5.01485ms for pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.646777   66755 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.652554   66755 pod_ready.go:93] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.652572   66755 pod_ready.go:82] duration metric: took 5.78883ms for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.652580   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:48.659404   66755 pod_ready.go:103] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:51.158765   66755 pod_ready.go:93] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.158787   66755 pod_ready.go:82] duration metric: took 4.506200726s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.158796   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.162949   66755 pod_ready.go:93] pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.162967   66755 pod_ready.go:82] duration metric: took 4.16468ms for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.162975   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9czbc" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.167309   66755 pod_ready.go:93] pod "kube-proxy-9czbc" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.167327   66755 pod_ready.go:82] duration metric: took 4.347415ms for pod "kube-proxy-9czbc" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.167334   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.171048   66755 pod_ready.go:93] pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.171065   66755 pod_ready.go:82] duration metric: took 3.724785ms for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.171071   66755 pod_ready.go:39] duration metric: took 7.545445402s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:51.171083   66755 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:28:51.171126   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:51.186751   66755 api_server.go:72] duration metric: took 7.831380288s to wait for apiserver process to appear ...
	I1004 04:28:51.186782   66755 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:28:51.186799   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:28:51.192753   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 200:
	ok
	I1004 04:28:51.194259   66755 api_server.go:141] control plane version: v1.31.1
	I1004 04:28:51.194284   66755 api_server.go:131] duration metric: took 7.491456ms to wait for apiserver health ...
	I1004 04:28:51.194292   66755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:28:51.241469   66755 system_pods.go:59] 9 kube-system pods found
	I1004 04:28:51.241491   66755 system_pods.go:61] "coredns-7c65d6cfc9-h5tbr" [87deb61f-2ce4-4d45-91da-c16557b5ef75] Running
	I1004 04:28:51.241496   66755 system_pods.go:61] "coredns-7c65d6cfc9-p52s6" [b9b3cd7f-f28e-4502-a55d-7792cfa5a6fe] Running
	I1004 04:28:51.241500   66755 system_pods.go:61] "etcd-embed-certs-934812" [487917e4-1b38-4b84-baf9-eacc1113718e] Running
	I1004 04:28:51.241503   66755 system_pods.go:61] "kube-apiserver-embed-certs-934812" [7fcfc483-3c53-415d-8329-5cae1ecd022f] Running
	I1004 04:28:51.241507   66755 system_pods.go:61] "kube-controller-manager-embed-certs-934812" [9ab25b16-916b-49f7-b34d-a12cf8552769] Running
	I1004 04:28:51.241514   66755 system_pods.go:61] "kube-proxy-9czbc" [dedff5a2-62b6-49c3-8369-9182d1c5bf7a] Running
	I1004 04:28:51.241517   66755 system_pods.go:61] "kube-scheduler-embed-certs-934812" [88a5acd5-108f-482a-ac24-7b75a30428ff] Running
	I1004 04:28:51.241525   66755 system_pods.go:61] "metrics-server-6867b74b74-fh2lk" [12e3e884-2ad3-4eaa-a505-822717e5bc8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:51.241528   66755 system_pods.go:61] "storage-provisioner" [67b4ef22-068c-4d14-840e-deab91c5ab94] Running
	I1004 04:28:51.241534   66755 system_pods.go:74] duration metric: took 47.237476ms to wait for pod list to return data ...
	I1004 04:28:51.241541   66755 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:28:51.438932   66755 default_sa.go:45] found service account: "default"
	I1004 04:28:51.438957   66755 default_sa.go:55] duration metric: took 197.410206ms for default service account to be created ...
	I1004 04:28:51.438966   66755 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:28:51.642064   66755 system_pods.go:86] 9 kube-system pods found
	I1004 04:28:51.642091   66755 system_pods.go:89] "coredns-7c65d6cfc9-h5tbr" [87deb61f-2ce4-4d45-91da-c16557b5ef75] Running
	I1004 04:28:51.642095   66755 system_pods.go:89] "coredns-7c65d6cfc9-p52s6" [b9b3cd7f-f28e-4502-a55d-7792cfa5a6fe] Running
	I1004 04:28:51.642100   66755 system_pods.go:89] "etcd-embed-certs-934812" [487917e4-1b38-4b84-baf9-eacc1113718e] Running
	I1004 04:28:51.642103   66755 system_pods.go:89] "kube-apiserver-embed-certs-934812" [7fcfc483-3c53-415d-8329-5cae1ecd022f] Running
	I1004 04:28:51.642107   66755 system_pods.go:89] "kube-controller-manager-embed-certs-934812" [9ab25b16-916b-49f7-b34d-a12cf8552769] Running
	I1004 04:28:51.642111   66755 system_pods.go:89] "kube-proxy-9czbc" [dedff5a2-62b6-49c3-8369-9182d1c5bf7a] Running
	I1004 04:28:51.642115   66755 system_pods.go:89] "kube-scheduler-embed-certs-934812" [88a5acd5-108f-482a-ac24-7b75a30428ff] Running
	I1004 04:28:51.642121   66755 system_pods.go:89] "metrics-server-6867b74b74-fh2lk" [12e3e884-2ad3-4eaa-a505-822717e5bc8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:51.642124   66755 system_pods.go:89] "storage-provisioner" [67b4ef22-068c-4d14-840e-deab91c5ab94] Running
	I1004 04:28:51.642133   66755 system_pods.go:126] duration metric: took 203.1616ms to wait for k8s-apps to be running ...
	I1004 04:28:51.642139   66755 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:28:51.642176   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:51.658916   66755 system_svc.go:56] duration metric: took 16.763146ms WaitForService to wait for kubelet
	I1004 04:28:51.658948   66755 kubeadm.go:582] duration metric: took 8.303579518s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:28:51.658964   66755 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:28:51.839048   66755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:28:51.839067   66755 node_conditions.go:123] node cpu capacity is 2
	I1004 04:28:51.839076   66755 node_conditions.go:105] duration metric: took 180.108785ms to run NodePressure ...
	I1004 04:28:51.839086   66755 start.go:241] waiting for startup goroutines ...
	I1004 04:28:51.839093   66755 start.go:246] waiting for cluster config update ...
	I1004 04:28:51.839103   66755 start.go:255] writing updated cluster config ...
	I1004 04:28:51.839343   66755 ssh_runner.go:195] Run: rm -f paused
	I1004 04:28:51.887283   66755 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:28:51.889326   66755 out.go:177] * Done! kubectl is now configured to use "embed-certs-934812" cluster and "default" namespace by default
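Note: the apiserver health probe recorded above ("Checking apiserver healthz at https://192.168.61.74:8443/healthz ... returned 200: ok") can be reproduced by hand when debugging a similar run. A minimal sketch, assuming the embed-certs-934812 VM is still reachable and that skipping TLS verification is acceptable for a one-off manual check:

    # illustrative only; -k skips certificate verification for a quick probe
    curl -k https://192.168.61.74:8443/healthz
    # a healthy control plane answers with the literal string: ok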
	I1004 04:28:48.765066   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:48.780955   67541 api_server.go:72] duration metric: took 4m18.802753607s to wait for apiserver process to appear ...
	I1004 04:28:48.780988   67541 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:28:48.781022   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:48.781074   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:48.817315   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:48.817337   67541 cri.go:89] found id: ""
	I1004 04:28:48.817346   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:48.817406   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.821619   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:48.821676   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:48.860019   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:48.860043   67541 cri.go:89] found id: ""
	I1004 04:28:48.860052   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:48.860101   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.864005   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:48.864065   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:48.901273   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:48.901295   67541 cri.go:89] found id: ""
	I1004 04:28:48.901303   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:48.901353   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.905950   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:48.906007   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:48.939708   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:48.939735   67541 cri.go:89] found id: ""
	I1004 04:28:48.939745   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:48.939812   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.943625   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:48.943692   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:48.979452   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:48.979481   67541 cri.go:89] found id: ""
	I1004 04:28:48.979490   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:48.979550   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.983629   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:48.983692   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:49.021137   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:49.021160   67541 cri.go:89] found id: ""
	I1004 04:28:49.021169   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:49.021242   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.025644   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:49.025712   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:49.062410   67541 cri.go:89] found id: ""
	I1004 04:28:49.062437   67541 logs.go:282] 0 containers: []
	W1004 04:28:49.062447   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:49.062452   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:49.062499   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:49.098959   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:49.098990   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:49.098996   67541 cri.go:89] found id: ""
	I1004 04:28:49.099005   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:49.099067   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.103474   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.107824   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:49.107852   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:49.228249   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:49.228278   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:49.269454   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:49.269479   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:49.305639   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:49.305666   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:49.770318   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:49.770348   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:49.808468   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:49.808493   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:49.884965   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:49.884997   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:49.901874   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:49.901898   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:49.952844   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:49.952869   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:49.986100   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:49.986141   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:50.023082   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:50.023108   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:50.074848   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:50.074876   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:50.112513   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:50.112541   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:52.658644   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:28:52.663076   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 200:
	ok
	I1004 04:28:52.663997   67541 api_server.go:141] control plane version: v1.31.1
	I1004 04:28:52.664017   67541 api_server.go:131] duration metric: took 3.8830221s to wait for apiserver health ...
	I1004 04:28:52.664024   67541 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:28:52.664045   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:52.664085   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:52.704174   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:52.704193   67541 cri.go:89] found id: ""
	I1004 04:28:52.704200   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:52.704253   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.708388   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:52.708438   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:52.743028   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:52.743053   67541 cri.go:89] found id: ""
	I1004 04:28:52.743062   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:52.743108   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.747354   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:52.747405   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:52.782350   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:52.782373   67541 cri.go:89] found id: ""
	I1004 04:28:52.782382   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:52.782424   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.786336   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:52.786394   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:52.826929   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:52.826950   67541 cri.go:89] found id: ""
	I1004 04:28:52.826958   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:52.827018   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.831039   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:52.831094   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:52.865963   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:52.865984   67541 cri.go:89] found id: ""
	I1004 04:28:52.865992   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:52.866032   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.869982   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:52.870024   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:52.919060   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:52.919081   67541 cri.go:89] found id: ""
	I1004 04:28:52.919091   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:52.919139   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.923080   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:52.923131   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:52.962615   67541 cri.go:89] found id: ""
	I1004 04:28:52.962636   67541 logs.go:282] 0 containers: []
	W1004 04:28:52.962643   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:52.962649   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:52.962706   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:52.999914   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:52.999936   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:52.999940   67541 cri.go:89] found id: ""
	I1004 04:28:52.999947   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:52.999998   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:53.003894   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:53.007759   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:53.007776   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:53.021269   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:53.021289   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:53.088683   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:53.088711   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:53.127363   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:53.127387   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:53.163467   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:53.163490   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:53.212683   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:53.212717   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:49.123892   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:51.124121   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:53.124323   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:49.686881   67282 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:28:49.687234   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:28:49.687487   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:28:53.569320   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:53.569360   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:53.644197   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:53.644231   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:53.747465   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:53.747497   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:53.788761   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:53.788798   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:53.822705   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:53.822737   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:53.857525   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:53.857548   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:53.894880   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:53.894904   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:56.455254   67541 system_pods.go:59] 8 kube-system pods found
	I1004 04:28:56.455286   67541 system_pods.go:61] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running
	I1004 04:28:56.455293   67541 system_pods.go:61] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running
	I1004 04:28:56.455299   67541 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running
	I1004 04:28:56.455304   67541 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running
	I1004 04:28:56.455309   67541 system_pods.go:61] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:28:56.455314   67541 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running
	I1004 04:28:56.455322   67541 system_pods.go:61] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:56.455329   67541 system_pods.go:61] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:28:56.455338   67541 system_pods.go:74] duration metric: took 3.791308758s to wait for pod list to return data ...
	I1004 04:28:56.455347   67541 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:28:56.457799   67541 default_sa.go:45] found service account: "default"
	I1004 04:28:56.457817   67541 default_sa.go:55] duration metric: took 2.463452ms for default service account to be created ...
	I1004 04:28:56.457825   67541 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:28:56.462569   67541 system_pods.go:86] 8 kube-system pods found
	I1004 04:28:56.462593   67541 system_pods.go:89] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running
	I1004 04:28:56.462601   67541 system_pods.go:89] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running
	I1004 04:28:56.462608   67541 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running
	I1004 04:28:56.462615   67541 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running
	I1004 04:28:56.462620   67541 system_pods.go:89] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:28:56.462626   67541 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running
	I1004 04:28:56.462632   67541 system_pods.go:89] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:56.462637   67541 system_pods.go:89] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:28:56.462645   67541 system_pods.go:126] duration metric: took 4.814032ms to wait for k8s-apps to be running ...
	I1004 04:28:56.462657   67541 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:28:56.462749   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:56.478944   67541 system_svc.go:56] duration metric: took 16.282384ms WaitForService to wait for kubelet
	I1004 04:28:56.478966   67541 kubeadm.go:582] duration metric: took 4m26.500769346s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:28:56.478982   67541 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:28:56.481946   67541 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:28:56.481968   67541 node_conditions.go:123] node cpu capacity is 2
	I1004 04:28:56.481980   67541 node_conditions.go:105] duration metric: took 2.992423ms to run NodePressure ...
	I1004 04:28:56.481993   67541 start.go:241] waiting for startup goroutines ...
	I1004 04:28:56.482006   67541 start.go:246] waiting for cluster config update ...
	I1004 04:28:56.482018   67541 start.go:255] writing updated cluster config ...
	I1004 04:28:56.482450   67541 ssh_runner.go:195] Run: rm -f paused
	I1004 04:28:56.528299   67541 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:28:56.530289   67541 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-281471" cluster and "default" namespace by default
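Note: the log-gathering passes above follow a two-step CRI pattern: resolve a container ID by name with crictl ps, then tail its logs. A condensed sketch of that pattern, assuming it is run on the node (e.g. over minikube ssh) with crictl available in PATH:

    # resolve the newest kube-apiserver container and tail its last 400 log lines,
    # mirroring the "crictl ps -a --quiet --name=..." / "crictl logs --tail 400 <id>" calls above
    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    [ -n "$id" ] && sudo crictl logs --tail 400 "$id"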
	I1004 04:28:55.625569   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:58.122544   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:54.687773   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:28:54.688026   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:29:00.124374   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:02.624622   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:05.123726   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:07.622036   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:04.688599   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:29:04.688808   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
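Note: the repeated kubelet-check failures above come from kubeadm polling the kubelet's local healthz endpoint and getting connection refused. When triaging a run like this, the same probe and the service state can be checked directly on the node; a sketch, assuming SSH access to the affected VM and that the kubelet runs as a systemd unit there:

    # the exact probe kubeadm reports as failing
    curl -sSL http://localhost:10248/healthz
    # assumption: kubelet is managed by systemd on the minikube guest
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 100 --no-pager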
	I1004 04:29:09.623060   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:11.623590   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:12.123919   66293 pod_ready.go:82] duration metric: took 4m0.007496621s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	E1004 04:29:12.123939   66293 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 04:29:12.123946   66293 pod_ready.go:39] duration metric: took 4m3.607239118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
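Note: both runs above exhaust the 4m WaitExtra deadline while their metrics-server pod stays Pending with an unready metrics-server container. To see why, the pod can be inspected directly; a sketch, assuming kubectl is pointed at the affected profile's context and that the deployment carries the usual k8s-app=metrics-server label:

    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl -n kube-system describe pod -l k8s-app=metrics-server
    kubectl -n kube-system logs deploy/metrics-server --tail=100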
	I1004 04:29:12.123960   66293 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:29:12.123985   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:12.124023   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:12.174748   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:12.174767   66293 cri.go:89] found id: ""
	I1004 04:29:12.174775   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:12.174823   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.179374   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:12.179436   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:12.219617   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:12.219637   66293 cri.go:89] found id: ""
	I1004 04:29:12.219646   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:12.219699   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.223774   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:12.223844   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:12.261339   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:12.261360   66293 cri.go:89] found id: ""
	I1004 04:29:12.261369   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:12.261424   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.265364   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:12.265414   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:12.313178   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:12.313197   66293 cri.go:89] found id: ""
	I1004 04:29:12.313206   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:12.313271   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.317440   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:12.317498   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:12.353037   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:12.353054   66293 cri.go:89] found id: ""
	I1004 04:29:12.353072   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:12.353125   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.357212   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:12.357272   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:12.392082   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:12.392106   66293 cri.go:89] found id: ""
	I1004 04:29:12.392115   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:12.392167   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.396333   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:12.396395   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:12.439298   66293 cri.go:89] found id: ""
	I1004 04:29:12.439329   66293 logs.go:282] 0 containers: []
	W1004 04:29:12.439337   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:12.439343   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:12.439387   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:12.478798   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:12.478814   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:12.478818   66293 cri.go:89] found id: ""
	I1004 04:29:12.478824   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:12.478866   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.483035   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.486977   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:12.486992   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:12.520849   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:12.520875   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:13.072628   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:13.072671   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:13.137973   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:13.138000   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:13.259585   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:13.259611   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:13.312315   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:13.312340   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:13.352351   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:13.352377   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:13.391319   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:13.391352   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:13.430681   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:13.430712   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:13.464929   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:13.464957   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:13.505312   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:13.505340   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:13.520476   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:13.520517   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:13.582723   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:13.582752   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:16.131437   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:29:16.150426   66293 api_server.go:72] duration metric: took 4m14.921074088s to wait for apiserver process to appear ...
	I1004 04:29:16.150457   66293 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:29:16.150498   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:16.150559   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:16.197236   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:16.197265   66293 cri.go:89] found id: ""
	I1004 04:29:16.197275   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:16.197341   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.202103   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:16.202187   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:16.236881   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:16.236907   66293 cri.go:89] found id: ""
	I1004 04:29:16.236916   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:16.236976   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.241220   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:16.241289   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:16.275727   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:16.275750   66293 cri.go:89] found id: ""
	I1004 04:29:16.275759   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:16.275828   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.280282   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:16.280352   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:16.320297   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:16.320323   66293 cri.go:89] found id: ""
	I1004 04:29:16.320332   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:16.320386   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.324982   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:16.325038   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:16.367062   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:16.367081   66293 cri.go:89] found id: ""
	I1004 04:29:16.367089   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:16.367143   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.371124   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:16.371182   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:16.405706   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:16.405728   66293 cri.go:89] found id: ""
	I1004 04:29:16.405738   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:16.405785   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.410027   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:16.410084   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:16.444937   66293 cri.go:89] found id: ""
	I1004 04:29:16.444961   66293 logs.go:282] 0 containers: []
	W1004 04:29:16.444971   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:16.444978   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:16.445032   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:16.480123   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:16.480153   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:16.480160   66293 cri.go:89] found id: ""
	I1004 04:29:16.480168   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:16.480228   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.484216   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.488156   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:16.488177   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:16.501573   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:16.501591   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:16.600789   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:16.600814   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:16.641604   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:16.641634   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:16.696735   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:16.696764   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:16.737153   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:16.737177   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:17.188490   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:17.188546   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:17.262072   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:17.262108   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:17.310881   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:17.310911   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:17.356105   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:17.356135   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:17.398916   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:17.398948   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:17.440122   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:17.440149   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:17.482529   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:17.482553   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:20.034163   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:29:20.039165   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 200:
	ok
	I1004 04:29:20.040105   66293 api_server.go:141] control plane version: v1.31.1
	I1004 04:29:20.040124   66293 api_server.go:131] duration metric: took 3.889660333s to wait for apiserver health ...
	I1004 04:29:20.040131   66293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:29:20.040156   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:20.040203   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:20.078208   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:20.078234   66293 cri.go:89] found id: ""
	I1004 04:29:20.078244   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:20.078306   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.082751   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:20.082808   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:20.128002   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:20.128024   66293 cri.go:89] found id: ""
	I1004 04:29:20.128034   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:20.128084   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.132039   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:20.132097   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:20.171887   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:20.171911   66293 cri.go:89] found id: ""
	I1004 04:29:20.171921   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:20.171978   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.176095   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:20.176150   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:20.215155   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:20.215175   66293 cri.go:89] found id: ""
	I1004 04:29:20.215183   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:20.215241   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.219738   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:20.219814   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:20.256116   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:20.256134   66293 cri.go:89] found id: ""
	I1004 04:29:20.256142   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:20.256194   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.261201   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:20.261281   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:20.302328   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:20.302350   66293 cri.go:89] found id: ""
	I1004 04:29:20.302359   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:20.302414   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.306488   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:20.306551   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:20.341266   66293 cri.go:89] found id: ""
	I1004 04:29:20.341290   66293 logs.go:282] 0 containers: []
	W1004 04:29:20.341300   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:20.341307   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:20.341361   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:20.379560   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:20.379584   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:20.379589   66293 cri.go:89] found id: ""
	I1004 04:29:20.379598   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:20.379653   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.383816   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.388118   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:20.388137   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:20.487661   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:20.487686   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:20.539728   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:20.539754   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:20.577435   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:20.577463   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:20.616450   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:20.616480   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:20.658292   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:20.658316   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:20.733483   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:20.733515   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:20.749004   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:20.749033   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:20.799355   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:20.799383   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:20.839676   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:20.839699   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:20.874870   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:20.874896   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:20.912635   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:20.912658   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:20.968377   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:20.968405   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:23.820462   66293 system_pods.go:59] 8 kube-system pods found
	I1004 04:29:23.820491   66293 system_pods.go:61] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running
	I1004 04:29:23.820497   66293 system_pods.go:61] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running
	I1004 04:29:23.820501   66293 system_pods.go:61] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running
	I1004 04:29:23.820506   66293 system_pods.go:61] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running
	I1004 04:29:23.820514   66293 system_pods.go:61] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running
	I1004 04:29:23.820517   66293 system_pods.go:61] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running
	I1004 04:29:23.820524   66293 system_pods.go:61] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:29:23.820529   66293 system_pods.go:61] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running
	I1004 04:29:23.820537   66293 system_pods.go:74] duration metric: took 3.780400092s to wait for pod list to return data ...
	I1004 04:29:23.820544   66293 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:29:23.823119   66293 default_sa.go:45] found service account: "default"
	I1004 04:29:23.823137   66293 default_sa.go:55] duration metric: took 2.58707ms for default service account to be created ...
	I1004 04:29:23.823144   66293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:29:23.827365   66293 system_pods.go:86] 8 kube-system pods found
	I1004 04:29:23.827385   66293 system_pods.go:89] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running
	I1004 04:29:23.827389   66293 system_pods.go:89] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running
	I1004 04:29:23.827393   66293 system_pods.go:89] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running
	I1004 04:29:23.827397   66293 system_pods.go:89] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running
	I1004 04:29:23.827400   66293 system_pods.go:89] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running
	I1004 04:29:23.827405   66293 system_pods.go:89] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running
	I1004 04:29:23.827410   66293 system_pods.go:89] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:29:23.827415   66293 system_pods.go:89] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running
	I1004 04:29:23.827422   66293 system_pods.go:126] duration metric: took 4.27475ms to wait for k8s-apps to be running ...
	I1004 04:29:23.827428   66293 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:29:23.827468   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:29:23.844696   66293 system_svc.go:56] duration metric: took 17.261418ms WaitForService to wait for kubelet
	I1004 04:29:23.844724   66293 kubeadm.go:582] duration metric: took 4m22.61537826s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:29:23.844746   66293 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:29:23.847873   66293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:29:23.847892   66293 node_conditions.go:123] node cpu capacity is 2
	I1004 04:29:23.847902   66293 node_conditions.go:105] duration metric: took 3.149916ms to run NodePressure ...
	I1004 04:29:23.847915   66293 start.go:241] waiting for startup goroutines ...
	I1004 04:29:23.847923   66293 start.go:246] waiting for cluster config update ...
	I1004 04:29:23.847932   66293 start.go:255] writing updated cluster config ...
	I1004 04:29:23.848202   66293 ssh_runner.go:195] Run: rm -f paused
	I1004 04:29:23.894092   66293 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:29:23.895736   66293 out.go:177] * Done! kubectl is now configured to use "no-preload-658545" cluster and "default" namespace by default
	I1004 04:29:24.690241   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:29:24.690419   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:04.692816   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:04.693091   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:04.693114   67282 kubeadm.go:310] 
	I1004 04:30:04.693149   67282 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:30:04.693214   67282 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:30:04.693236   67282 kubeadm.go:310] 
	I1004 04:30:04.693295   67282 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:30:04.693327   67282 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:30:04.693451   67282 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:30:04.693460   67282 kubeadm.go:310] 
	I1004 04:30:04.693568   67282 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:30:04.693614   67282 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:30:04.693668   67282 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:30:04.693688   67282 kubeadm.go:310] 
	I1004 04:30:04.693843   67282 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:30:04.693966   67282 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:30:04.693982   67282 kubeadm.go:310] 
	I1004 04:30:04.694097   67282 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:30:04.694218   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:30:04.694305   67282 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:30:04.694387   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:30:04.694399   67282 kubeadm.go:310] 
	I1004 04:30:04.695379   67282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:30:04.695478   67282 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:30:04.695566   67282 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1004 04:30:04.695695   67282 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1004 04:30:04.695742   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:30:05.153635   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:30:05.170057   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:30:05.179541   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:30:05.179563   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:30:05.179611   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:30:05.188969   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:30:05.189025   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:30:05.198049   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:30:05.207031   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:30:05.207118   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:30:05.216934   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:30:05.226477   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:30:05.226541   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:30:05.236222   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:30:05.245314   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:30:05.245374   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:30:05.255762   67282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:30:05.329816   67282 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:30:05.329953   67282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:30:05.482342   67282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:30:05.482549   67282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:30:05.482692   67282 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 04:30:05.666400   67282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:30:05.668115   67282 out.go:235]   - Generating certificates and keys ...
	I1004 04:30:05.668217   67282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:30:05.668319   67282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:30:05.668460   67282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:30:05.668562   67282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:30:05.668660   67282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:30:05.668734   67282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:30:05.668823   67282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:30:05.668905   67282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:30:05.669010   67282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:30:05.669130   67282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:30:05.669186   67282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:30:05.669269   67282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:30:05.773446   67282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:30:05.823736   67282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:30:05.951294   67282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:30:06.250340   67282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:30:06.275797   67282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:30:06.276877   67282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:30:06.276944   67282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:30:06.437286   67282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:30:06.438849   67282 out.go:235]   - Booting up control plane ...
	I1004 04:30:06.438952   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:30:06.443688   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:30:06.444596   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:30:06.445267   67282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:30:06.457334   67282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 04:30:46.456706   67282 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:30:46.456854   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:46.457117   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:51.456986   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:51.457240   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:31:01.457062   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:31:01.457288   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:31:21.456976   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:31:21.457277   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:32:01.456978   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:32:01.457225   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:32:01.457249   67282 kubeadm.go:310] 
	I1004 04:32:01.457312   67282 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:32:01.457374   67282 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:32:01.457383   67282 kubeadm.go:310] 
	I1004 04:32:01.457434   67282 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:32:01.457512   67282 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:32:01.457678   67282 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:32:01.457692   67282 kubeadm.go:310] 
	I1004 04:32:01.457838   67282 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:32:01.457892   67282 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:32:01.457946   67282 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:32:01.457957   67282 kubeadm.go:310] 
	I1004 04:32:01.458102   67282 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:32:01.458217   67282 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:32:01.458233   67282 kubeadm.go:310] 
	I1004 04:32:01.458379   67282 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:32:01.458494   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:32:01.458604   67282 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:32:01.458699   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:32:01.458710   67282 kubeadm.go:310] 
	I1004 04:32:01.459157   67282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:32:01.459272   67282 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:32:01.459386   67282 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1004 04:32:01.459464   67282 kubeadm.go:394] duration metric: took 7m57.553695137s to StartCluster
	I1004 04:32:01.459522   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:32:01.459586   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:32:01.500997   67282 cri.go:89] found id: ""
	I1004 04:32:01.501026   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.501037   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:32:01.501044   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:32:01.501102   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:32:01.537240   67282 cri.go:89] found id: ""
	I1004 04:32:01.537276   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.537288   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:32:01.537295   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:32:01.537349   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:32:01.573959   67282 cri.go:89] found id: ""
	I1004 04:32:01.573995   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.574007   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:32:01.574013   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:32:01.574074   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:32:01.610614   67282 cri.go:89] found id: ""
	I1004 04:32:01.610645   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.610657   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:32:01.610665   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:32:01.610716   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:32:01.645520   67282 cri.go:89] found id: ""
	I1004 04:32:01.645554   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.645567   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:32:01.645574   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:32:01.645640   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:32:01.679787   67282 cri.go:89] found id: ""
	I1004 04:32:01.679814   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.679823   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:32:01.679828   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:32:01.679873   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:32:01.714860   67282 cri.go:89] found id: ""
	I1004 04:32:01.714883   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.714891   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:32:01.714897   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:32:01.714952   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:32:01.761170   67282 cri.go:89] found id: ""
	I1004 04:32:01.761198   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.761208   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:32:01.761220   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:32:01.761232   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:32:01.822966   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:32:01.823006   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:32:01.839482   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:32:01.839510   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:32:01.917863   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:32:01.917887   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:32:01.917901   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:32:02.027216   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:32:02.027247   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1004 04:32:02.069804   67282 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1004 04:32:02.069852   67282 out.go:270] * 
	W1004 04:32:02.069922   67282 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:32:02.069939   67282 out.go:270] * 
	W1004 04:32:02.070740   67282 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 04:32:02.074308   67282 out.go:201] 
	W1004 04:32:02.075387   67282 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:32:02.075427   67282 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1004 04:32:02.075458   67282 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1004 04:32:02.076675   67282 out.go:201] 
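	
	(Editor's note: the following is a minimal sketch of the troubleshooting steps that the kubeadm/minikube output above already suggests, collected in one place. The profile name <profile> is a placeholder, and the --extra-config flag is only the suggestion quoted in the log, not a verified fix for this failure.)
	
		# open a shell on the affected minikube node
		minikube ssh -p <profile>
		# check whether the kubelet is running and inspect its logs
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		# list Kubernetes containers known to CRI-O and inspect a failing one
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
		# if a cgroup-driver mismatch is suspected, retry as the log suggests
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd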
	
	
	==> CRI-O <==
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.483664565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016678483643659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92f0da49-4a57-493d-a151-399d1063060b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.484376913Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84359b82-85b6-4f00-bf13-485890a69be0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.484446836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84359b82-85b6-4f00-bf13-485890a69be0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.484862658Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b,PodSandboxId:3f576cb1d451bdc93e36ffa85a35a3ae115568d6e622f85d6d37551fa16992e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728015897990347963,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b644e87c-505e-44c0-b0a0-e07df97f5f51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858a12b10e9638b4d7e2414bd17ddd89695f92cd0560c6772c3d4fc7b17fa26d,PodSandboxId:1ca0b9c5830865ffa646cc1fac486a97d0641139b4387c581e4aa8b99d73698b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728015886290459896,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bf12a9c-f04f-41fe-803a-88cc8e2e2219,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0,PodSandboxId:1274099de2596d78e48eae95734d2841408b0c9aef23d6d3b582a8b17be116ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015882862374539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wz6rd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6936a096-4173-4f58-aa65-001ea438e3a4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754,PodSandboxId:608f4b5a81f87489b980f3b3b5fa78db9962da2f9569dde6cdc52d80ff6e08ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015867217268118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4nnld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e045721-1
f51-44cd-afc7-acf8e4ce6845,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641,PodSandboxId:3f576cb1d451bdc93e36ffa85a35a3ae115568d6e622f85d6d37551fa16992e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728015867184182408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b644e87c-505e-44c0-b0a0
-e07df97f5f51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40,PodSandboxId:ee10ec4f78c5cb361eee404342a9d09c04940c3425b462e1ed9acf1d31f94d7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015863471854467,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0eeecb53a740d563832e2d4d843fd7f2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0,PodSandboxId:f58ec2a6750cd14e1558ac87b3a64744b91ae9450da547d45fcbd7c8dcf68029,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015863419449097,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0899f001d25e977da46f3ca1
dadae4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3,PodSandboxId:ad5a7fcc3f358b82422e4de43966947e20e13eddc68e23b5a7699b1c92287df8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015863438032752,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347bb00aa2cf3b8a269f04ff13dd
6a5f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500,PodSandboxId:eed40da104c75b93cb67d7aa80dc3d84e25c2f1d8be9a48e13e54058652b600c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015863374865664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f77da40e3c60520ed0857cb3dfa36e
06,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84359b82-85b6-4f00-bf13-485890a69be0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.525002928Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d028019-ac48-46de-ac7d-3bc7d1b94468 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.525073879Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d028019-ac48-46de-ac7d-3bc7d1b94468 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.526105093Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59d96aeb-6e6e-4e1d-a799-82ee529862d7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.526486096Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016678526464841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59d96aeb-6e6e-4e1d-a799-82ee529862d7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.527363104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05e78718-a793-4872-b398-db8970d00375 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.527415009Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05e78718-a793-4872-b398-db8970d00375 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.527708795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b,PodSandboxId:3f576cb1d451bdc93e36ffa85a35a3ae115568d6e622f85d6d37551fa16992e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728015897990347963,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b644e87c-505e-44c0-b0a0-e07df97f5f51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858a12b10e9638b4d7e2414bd17ddd89695f92cd0560c6772c3d4fc7b17fa26d,PodSandboxId:1ca0b9c5830865ffa646cc1fac486a97d0641139b4387c581e4aa8b99d73698b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728015886290459896,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bf12a9c-f04f-41fe-803a-88cc8e2e2219,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0,PodSandboxId:1274099de2596d78e48eae95734d2841408b0c9aef23d6d3b582a8b17be116ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015882862374539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wz6rd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6936a096-4173-4f58-aa65-001ea438e3a4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754,PodSandboxId:608f4b5a81f87489b980f3b3b5fa78db9962da2f9569dde6cdc52d80ff6e08ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015867217268118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4nnld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e045721-1
f51-44cd-afc7-acf8e4ce6845,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641,PodSandboxId:3f576cb1d451bdc93e36ffa85a35a3ae115568d6e622f85d6d37551fa16992e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728015867184182408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b644e87c-505e-44c0-b0a0
-e07df97f5f51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40,PodSandboxId:ee10ec4f78c5cb361eee404342a9d09c04940c3425b462e1ed9acf1d31f94d7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015863471854467,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0eeecb53a740d563832e2d4d843fd7f2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0,PodSandboxId:f58ec2a6750cd14e1558ac87b3a64744b91ae9450da547d45fcbd7c8dcf68029,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015863419449097,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0899f001d25e977da46f3ca1
dadae4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3,PodSandboxId:ad5a7fcc3f358b82422e4de43966947e20e13eddc68e23b5a7699b1c92287df8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015863438032752,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347bb00aa2cf3b8a269f04ff13dd
6a5f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500,PodSandboxId:eed40da104c75b93cb67d7aa80dc3d84e25c2f1d8be9a48e13e54058652b600c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015863374865664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f77da40e3c60520ed0857cb3dfa36e
06,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05e78718-a793-4872-b398-db8970d00375 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.563939757Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16dc3f67-6234-44ab-bc4c-9fe6f0be1aae name=/runtime.v1.RuntimeService/Version
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.564049409Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16dc3f67-6234-44ab-bc4c-9fe6f0be1aae name=/runtime.v1.RuntimeService/Version
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.565051200Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c072b829-b182-4c82-b3d9-18ffb08eb2d4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.565892548Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016678565819632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c072b829-b182-4c82-b3d9-18ffb08eb2d4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.566495698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24a78cb9-2c6b-410e-bfb4-c1f767170f91 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.566601686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24a78cb9-2c6b-410e-bfb4-c1f767170f91 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.566817712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b,PodSandboxId:3f576cb1d451bdc93e36ffa85a35a3ae115568d6e622f85d6d37551fa16992e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728015897990347963,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b644e87c-505e-44c0-b0a0-e07df97f5f51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858a12b10e9638b4d7e2414bd17ddd89695f92cd0560c6772c3d4fc7b17fa26d,PodSandboxId:1ca0b9c5830865ffa646cc1fac486a97d0641139b4387c581e4aa8b99d73698b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728015886290459896,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bf12a9c-f04f-41fe-803a-88cc8e2e2219,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0,PodSandboxId:1274099de2596d78e48eae95734d2841408b0c9aef23d6d3b582a8b17be116ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015882862374539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wz6rd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6936a096-4173-4f58-aa65-001ea438e3a4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754,PodSandboxId:608f4b5a81f87489b980f3b3b5fa78db9962da2f9569dde6cdc52d80ff6e08ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015867217268118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4nnld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e045721-1
f51-44cd-afc7-acf8e4ce6845,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641,PodSandboxId:3f576cb1d451bdc93e36ffa85a35a3ae115568d6e622f85d6d37551fa16992e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728015867184182408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b644e87c-505e-44c0-b0a0
-e07df97f5f51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40,PodSandboxId:ee10ec4f78c5cb361eee404342a9d09c04940c3425b462e1ed9acf1d31f94d7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015863471854467,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0eeecb53a740d563832e2d4d843fd7f2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0,PodSandboxId:f58ec2a6750cd14e1558ac87b3a64744b91ae9450da547d45fcbd7c8dcf68029,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015863419449097,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0899f001d25e977da46f3ca1
dadae4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3,PodSandboxId:ad5a7fcc3f358b82422e4de43966947e20e13eddc68e23b5a7699b1c92287df8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015863438032752,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347bb00aa2cf3b8a269f04ff13dd
6a5f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500,PodSandboxId:eed40da104c75b93cb67d7aa80dc3d84e25c2f1d8be9a48e13e54058652b600c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015863374865664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f77da40e3c60520ed0857cb3dfa36e
06,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24a78cb9-2c6b-410e-bfb4-c1f767170f91 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.598324936Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee82da98-ec6a-483d-85e2-16c14a3d2c47 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.598397409Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee82da98-ec6a-483d-85e2-16c14a3d2c47 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.599597038Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1a63bb0-748d-4533-ab10-9b2d67a30ff3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.600170625Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016678600112628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1a63bb0-748d-4533-ab10-9b2d67a30ff3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.600782958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22423cd3-10f9-4861-9d1c-a40fc0bd2398 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.600835728Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22423cd3-10f9-4861-9d1c-a40fc0bd2398 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:37:58 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:37:58.601011309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b,PodSandboxId:3f576cb1d451bdc93e36ffa85a35a3ae115568d6e622f85d6d37551fa16992e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728015897990347963,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b644e87c-505e-44c0-b0a0-e07df97f5f51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858a12b10e9638b4d7e2414bd17ddd89695f92cd0560c6772c3d4fc7b17fa26d,PodSandboxId:1ca0b9c5830865ffa646cc1fac486a97d0641139b4387c581e4aa8b99d73698b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728015886290459896,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bf12a9c-f04f-41fe-803a-88cc8e2e2219,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0,PodSandboxId:1274099de2596d78e48eae95734d2841408b0c9aef23d6d3b582a8b17be116ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015882862374539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wz6rd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6936a096-4173-4f58-aa65-001ea438e3a4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754,PodSandboxId:608f4b5a81f87489b980f3b3b5fa78db9962da2f9569dde6cdc52d80ff6e08ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015867217268118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4nnld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e045721-1
f51-44cd-afc7-acf8e4ce6845,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641,PodSandboxId:3f576cb1d451bdc93e36ffa85a35a3ae115568d6e622f85d6d37551fa16992e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728015867184182408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b644e87c-505e-44c0-b0a0
-e07df97f5f51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40,PodSandboxId:ee10ec4f78c5cb361eee404342a9d09c04940c3425b462e1ed9acf1d31f94d7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015863471854467,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0eeecb53a740d563832e2d4d843fd7f2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0,PodSandboxId:f58ec2a6750cd14e1558ac87b3a64744b91ae9450da547d45fcbd7c8dcf68029,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015863419449097,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0899f001d25e977da46f3ca1
dadae4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3,PodSandboxId:ad5a7fcc3f358b82422e4de43966947e20e13eddc68e23b5a7699b1c92287df8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015863438032752,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347bb00aa2cf3b8a269f04ff13dd
6a5f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500,PodSandboxId:eed40da104c75b93cb67d7aa80dc3d84e25c2f1d8be9a48e13e54058652b600c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015863374865664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f77da40e3c60520ed0857cb3dfa36e
06,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22423cd3-10f9-4861-9d1c-a40fc0bd2398 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ec898e33ba398       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   3f576cb1d451b       storage-provisioner
	858a12b10e963       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   1ca0b9c583086       busybox
	7c6d3555bccdd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   1274099de2596       coredns-7c65d6cfc9-wz6rd
	387473e4357dc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   608f4b5a81f87       kube-proxy-4nnld
	d2d04e275366a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   3f576cb1d451b       storage-provisioner
	d889ba1109ff2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   ee10ec4f78c5c       kube-controller-manager-default-k8s-diff-port-281471
	59f9dd635170a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   ad5a7fcc3f358       kube-scheduler-default-k8s-diff-port-281471
	fe3375782091c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   f58ec2a6750cd       etcd-default-k8s-diff-port-281471
	8e5ab1b72e413       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   eed40da104c75       kube-apiserver-default-k8s-diff-port-281471
	
	
	==> coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33238 - 32232 "HINFO IN 6507743067045154330.9083972573469339683. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014209041s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-281471
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-281471
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=default-k8s-diff-port-281471
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T04_18_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 04:18:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-281471
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 04:37:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 04:35:09 +0000   Fri, 04 Oct 2024 04:18:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 04:35:09 +0000   Fri, 04 Oct 2024 04:18:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 04:35:09 +0000   Fri, 04 Oct 2024 04:18:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 04:35:09 +0000   Fri, 04 Oct 2024 04:24:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.201
	  Hostname:    default-k8s-diff-port-281471
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c1ffe0bbcc447ad9a342c41ec9f8913
	  System UUID:                5c1ffe0b-bcc4-47ad-9a34-2c41ec9f8913
	  Boot ID:                    62a49ed2-5300-43d2-afd5-efe7c53cf70c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7c65d6cfc9-wz6rd                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-default-k8s-diff-port-281471                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-281471             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-281471    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-4nnld                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-281471             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-6867b74b74-f6qhr                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-281471 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-281471 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-281471 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeReady                19m                kubelet          Node default-k8s-diff-port-281471 status is now: NodeReady
	  Normal  RegisteredNode           19m                node-controller  Node default-k8s-diff-port-281471 event: Registered Node default-k8s-diff-port-281471 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-281471 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-281471 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-281471 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-281471 event: Registered Node default-k8s-diff-port-281471 in Controller
	
	
	==> dmesg <==
	[Oct 4 04:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055222] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040588] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct 4 04:24] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.553627] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.602347] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.976974] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.059998] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067965] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.188511] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.148379] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.306488] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[  +4.495235] systemd-fstab-generator[788]: Ignoring "noauto" option for root device
	[  +0.063166] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.005662] systemd-fstab-generator[908]: Ignoring "noauto" option for root device
	[  +4.661682] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.920022] systemd-fstab-generator[1547]: Ignoring "noauto" option for root device
	[  +4.757043] kauditd_printk_skb: 64 callbacks suppressed
	[  +7.792493] kauditd_printk_skb: 11 callbacks suppressed
	[ +15.456146] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] <==
	{"level":"info","ts":"2024-10-04T04:24:23.947875Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:24:23.958313Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-04T04:24:23.958716Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7315e47f21b89457","initial-advertise-peer-urls":["https://192.168.39.201:2380"],"listen-peer-urls":["https://192.168.39.201:2380"],"advertise-client-urls":["https://192.168.39.201:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.201:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-04T04:24:23.958765Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-04T04:24:23.961000Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.201:2380"}
	{"level":"info","ts":"2024-10-04T04:24:23.961061Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.201:2380"}
	{"level":"info","ts":"2024-10-04T04:24:25.014639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-04T04:24:25.014790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-04T04:24:25.014862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 received MsgPreVoteResp from 7315e47f21b89457 at term 2"}
	{"level":"info","ts":"2024-10-04T04:24:25.014908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 became candidate at term 3"}
	{"level":"info","ts":"2024-10-04T04:24:25.014947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 received MsgVoteResp from 7315e47f21b89457 at term 3"}
	{"level":"info","ts":"2024-10-04T04:24:25.014982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 became leader at term 3"}
	{"level":"info","ts":"2024-10-04T04:24:25.015013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7315e47f21b89457 elected leader 7315e47f21b89457 at term 3"}
	{"level":"info","ts":"2024-10-04T04:24:25.019944Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7315e47f21b89457","local-member-attributes":"{Name:default-k8s-diff-port-281471 ClientURLs:[https://192.168.39.201:2379]}","request-path":"/0/members/7315e47f21b89457/attributes","cluster-id":"1777413e1d1fef45","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T04:24:25.020421Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T04:24:25.020600Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T04:24:25.024300Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:24:25.030634Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T04:24:25.030772Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T04:24:25.031281Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:24:25.030665Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.201:2379"}
	{"level":"info","ts":"2024-10-04T04:24:25.034905Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T04:34:25.090602Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":835}
	{"level":"info","ts":"2024-10-04T04:34:25.099237Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":835,"took":"8.257752ms","hash":3221089716,"current-db-size-bytes":2584576,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2584576,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-10-04T04:34:25.099340Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3221089716,"revision":835,"compact-revision":-1}
	
	
	==> kernel <==
	 04:37:58 up 14 min,  0 users,  load average: 0.09, 0.13, 0.09
	Linux default-k8s-diff-port-281471 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] <==
	E1004 04:34:27.704965       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1004 04:34:27.705121       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1004 04:34:27.706129       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:34:27.706154       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 04:35:27.707316       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:35:27.707611       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1004 04:35:27.707691       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:35:27.707728       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1004 04:35:27.708908       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:35:27.708981       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 04:37:27.710032       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:37:27.710340       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1004 04:37:27.710195       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:37:27.710384       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1004 04:37:27.711614       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:37:27.711677       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] <==
	E1004 04:32:30.183083       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:32:30.632403       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:33:00.188913       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:33:00.641468       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:33:30.195111       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:33:30.648867       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:34:00.201181       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:34:00.655007       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:34:30.208323       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:34:30.661888       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:35:00.214709       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:35:00.669673       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 04:35:09.686134       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-281471"
	I1004 04:35:16.766816       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="289.198µs"
	I1004 04:35:29.766680       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="158.588µs"
	E1004 04:35:30.221125       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:35:30.676765       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:36:00.227673       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:36:00.684199       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:36:30.234280       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:36:30.690702       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:37:00.241789       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:37:00.698637       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:37:30.248836       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:37:30.706399       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 04:24:27.410456       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 04:24:27.422824       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.201"]
	E1004 04:24:27.423036       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 04:24:27.458963       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 04:24:27.459004       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 04:24:27.459035       1 server_linux.go:169] "Using iptables Proxier"
	I1004 04:24:27.461860       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 04:24:27.462358       1 server.go:483] "Version info" version="v1.31.1"
	I1004 04:24:27.462633       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 04:24:27.463878       1 config.go:199] "Starting service config controller"
	I1004 04:24:27.463938       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 04:24:27.463996       1 config.go:105] "Starting endpoint slice config controller"
	I1004 04:24:27.464019       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 04:24:27.465775       1 config.go:328] "Starting node config controller"
	I1004 04:24:27.466702       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 04:24:27.564086       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 04:24:27.564231       1 shared_informer.go:320] Caches are synced for service config
	I1004 04:24:27.567516       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] <==
	I1004 04:24:24.757582       1 serving.go:386] Generated self-signed cert in-memory
	W1004 04:24:26.684632       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 04:24:26.684783       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 04:24:26.684873       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 04:24:26.684901       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 04:24:26.728749       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1004 04:24:26.728855       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 04:24:26.731402       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 04:24:26.731498       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 04:24:26.732194       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1004 04:24:26.732287       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1004 04:24:26.837399       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 04:36:46 default-k8s-diff-port-281471 kubelet[915]: E1004 04:36:46.752244     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-f6qhr" podUID="46c2870a-41a6-46a1-bbbd-f38f2e266873"
	Oct 04 04:36:52 default-k8s-diff-port-281471 kubelet[915]: E1004 04:36:52.948132     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016612947877362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:36:52 default-k8s-diff-port-281471 kubelet[915]: E1004 04:36:52.948174     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016612947877362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:36:58 default-k8s-diff-port-281471 kubelet[915]: E1004 04:36:58.752896     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-f6qhr" podUID="46c2870a-41a6-46a1-bbbd-f38f2e266873"
	Oct 04 04:37:02 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:02.950214     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016622949494924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:02 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:02.950658     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016622949494924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:09 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:09.753308     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-f6qhr" podUID="46c2870a-41a6-46a1-bbbd-f38f2e266873"
	Oct 04 04:37:12 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:12.952027     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016632951487962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:12 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:12.952484     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016632951487962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:22 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:22.783365     915 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 04:37:22 default-k8s-diff-port-281471 kubelet[915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 04:37:22 default-k8s-diff-port-281471 kubelet[915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 04:37:22 default-k8s-diff-port-281471 kubelet[915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 04:37:22 default-k8s-diff-port-281471 kubelet[915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 04:37:22 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:22.953620     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016642953333931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:22 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:22.953645     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016642953333931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:23 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:23.753038     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-f6qhr" podUID="46c2870a-41a6-46a1-bbbd-f38f2e266873"
	Oct 04 04:37:32 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:32.955876     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016652955439534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:32 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:32.955928     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016652955439534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:35 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:35.752329     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-f6qhr" podUID="46c2870a-41a6-46a1-bbbd-f38f2e266873"
	Oct 04 04:37:42 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:42.957444     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016662956875082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:42 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:42.958049     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016662956875082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:50 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:50.752641     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-f6qhr" podUID="46c2870a-41a6-46a1-bbbd-f38f2e266873"
	Oct 04 04:37:52 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:52.959796     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016672959326731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:52 default-k8s-diff-port-281471 kubelet[915]: E1004 04:37:52.960069     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016672959326731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] <==
	I1004 04:24:27.307080       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1004 04:24:57.314392       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] <==
	I1004 04:24:58.102434       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 04:24:58.114499       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 04:24:58.114743       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 04:24:58.132784       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 04:24:58.133796       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-281471_4d9956fa-531e-46b7-9e36-b11659f8607e!
	I1004 04:24:58.133466       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f432bad1-b1f6-4130-b9f1-8e2b00dd53a4", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-281471_4d9956fa-531e-46b7-9e36-b11659f8607e became leader
	I1004 04:24:58.234597       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-281471_4d9956fa-531e-46b7-9e36-b11659f8607e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-281471 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-f6qhr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-281471 describe pod metrics-server-6867b74b74-f6qhr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-281471 describe pod metrics-server-6867b74b74-f6qhr: exit status 1 (63.801899ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-f6qhr" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-281471 describe pod metrics-server-6867b74b74-f6qhr: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.07s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.09s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-658545 -n no-preload-658545
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-04 04:38:24.414727177 +0000 UTC m=+6623.347667736
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658545 -n no-preload-658545
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-658545 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-658545 logs -n 25: (2.024862417s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-658545                                   | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-934812            | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-934812                                  | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-617497             | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-617497                  | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617497 --memory=2200 --alsologtostderr   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:17 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-617497 image list                           | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	| delete  | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:18 UTC |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-658545                  | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-658545                                   | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC | 04 Oct 24 04:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-281471  | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC | 04 Oct 24 04:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-420062        | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-934812                 | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-934812                                  | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:19 UTC | 04 Oct 24 04:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC | 04 Oct 24 04:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-420062             | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC | 04 Oct 24 04:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-281471       | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:21 UTC | 04 Oct 24 04:28 UTC |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
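The audit table above records the exact minikube invocations issued during this test run; its final `start` row for default-k8s-diff-port-281471 is the invocation whose output appears below under "Last Start". As a reference only, that command can be replayed outside the CI harness with a small Go wrapper like the sketch below: the profile name and flags are copied from the table and the binary path from MINIKUBE_BIN in the log below, while everything else (running it from the workspace root, letting it create or reuse a local KVM profile) is an assumption, not part of the test harness.

// replay_start.go: a minimal, illustrative wrapper that re-runs the last "start"
// command from the audit table. Not minikube or test-harness code.
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Flags are copied verbatim from the audit table row for
	// default-k8s-diff-port-281471; the binary path matches MINIKUBE_BIN.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "default-k8s-diff-port-281471",
		"--memory=2200",
		"--alsologtostderr", "--wait=true",
		"--apiserver-port=8444",
		"--driver=kvm2",
		"--container-runtime=crio",
		"--kubernetes-version=v1.31.1",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}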
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 04:21:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
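Every subsequent line in this dump follows the klog-style header declared above. For readers scripting against these logs, a small Go helper along these lines can split the header from the message; the regexp and field names are illustrative choices, not code shipped by minikube.

// klogparse.go: splits a klog-style line ("[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg")
// into its fields. Illustrative only.
package main

import (
	"fmt"
	"regexp"
)

var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

func main() {
	line := "I1004 04:21:23.276574   67541 out.go:345] Setting OutFile to fd 1 ..."
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s mmdd=%s time=%s pid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}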
	I1004 04:21:23.276574   67541 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:21:23.276701   67541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:21:23.276710   67541 out.go:358] Setting ErrFile to fd 2...
	I1004 04:21:23.276715   67541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:21:23.276893   67541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:21:23.277439   67541 out.go:352] Setting JSON to false
	I1004 04:21:23.278387   67541 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7428,"bootTime":1728008255,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 04:21:23.278482   67541 start.go:139] virtualization: kvm guest
	I1004 04:21:23.280571   67541 out.go:177] * [default-k8s-diff-port-281471] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 04:21:23.282033   67541 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 04:21:23.282063   67541 notify.go:220] Checking for updates...
	I1004 04:21:23.284454   67541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 04:21:23.285843   67541 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:21:23.287026   67541 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:21:23.288328   67541 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 04:21:23.289544   67541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 04:21:23.291321   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:21:23.291979   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:21:23.292059   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:21:23.306995   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I1004 04:21:23.307440   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:21:23.308080   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:21:23.308106   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:21:23.308442   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:21:23.308642   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:21:23.308893   67541 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 04:21:23.309208   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:21:23.309280   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:21:23.323807   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I1004 04:21:23.324281   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:21:23.324777   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:21:23.324797   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:21:23.325085   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:21:23.325248   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:21:23.359916   67541 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 04:21:23.361482   67541 start.go:297] selected driver: kvm2
	I1004 04:21:23.361504   67541 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:21:23.361657   67541 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 04:21:23.362533   67541 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:21:23.362621   67541 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 04:21:23.378088   67541 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 04:21:23.378515   67541 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:21:23.378547   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:21:23.378591   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:21:23.378627   67541 start.go:340] cluster config:
	{Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:21:23.378727   67541 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:21:23.380705   67541 out.go:177] * Starting "default-k8s-diff-port-281471" primary control-plane node in "default-k8s-diff-port-281471" cluster
	I1004 04:21:20.068102   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:23.140106   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:23.381986   67541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:21:23.382036   67541 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 04:21:23.382048   67541 cache.go:56] Caching tarball of preloaded images
	I1004 04:21:23.382125   67541 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 04:21:23.382135   67541 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 04:21:23.382254   67541 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/config.json ...
	I1004 04:21:23.382433   67541 start.go:360] acquireMachinesLock for default-k8s-diff-port-281471: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:21:29.220163   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:32.292105   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:38.372080   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:41.444091   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:47.524103   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:50.596091   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:56.676086   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:59.748055   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:05.828125   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:08.900042   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:14.980094   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:18.052114   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:24.132087   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:27.204139   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:33.284040   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:36.356076   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:42.436190   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:45.508075   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:51.588061   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:54.660042   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:00.740141   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:03.812099   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:09.892076   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:12.964133   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:15.968919   66755 start.go:364] duration metric: took 4m6.72532498s to acquireMachinesLock for "embed-certs-934812"
	I1004 04:23:15.968984   66755 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:15.968992   66755 fix.go:54] fixHost starting: 
	I1004 04:23:15.969309   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:15.969356   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:15.984739   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34447
	I1004 04:23:15.985214   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:15.985743   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:23:15.985769   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:15.986104   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:15.986289   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:15.986449   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:23:15.988237   66755 fix.go:112] recreateIfNeeded on embed-certs-934812: state=Stopped err=<nil>
	I1004 04:23:15.988263   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	W1004 04:23:15.988415   66755 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:15.990473   66755 out.go:177] * Restarting existing kvm2 VM for "embed-certs-934812" ...
	I1004 04:23:15.965929   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:15.965974   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:23:15.966321   66293 buildroot.go:166] provisioning hostname "no-preload-658545"
	I1004 04:23:15.966348   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:23:15.966530   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:23:15.968760   66293 machine.go:96] duration metric: took 4m37.423316886s to provisionDockerMachine
	I1004 04:23:15.968806   66293 fix.go:56] duration metric: took 4m37.446149084s for fixHost
	I1004 04:23:15.968814   66293 start.go:83] releasing machines lock for "no-preload-658545", held for 4m37.446179902s
	W1004 04:23:15.968836   66293 start.go:714] error starting host: provision: host is not running
	W1004 04:23:15.968935   66293 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1004 04:23:15.968946   66293 start.go:729] Will try again in 5 seconds ...
	I1004 04:23:15.991914   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Start
	I1004 04:23:15.992106   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring networks are active...
	I1004 04:23:15.992995   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring network default is active
	I1004 04:23:15.993392   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring network mk-embed-certs-934812 is active
	I1004 04:23:15.993728   66755 main.go:141] libmachine: (embed-certs-934812) Getting domain xml...
	I1004 04:23:15.994410   66755 main.go:141] libmachine: (embed-certs-934812) Creating domain...
	I1004 04:23:17.232262   66755 main.go:141] libmachine: (embed-certs-934812) Waiting to get IP...
	I1004 04:23:17.233339   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.233793   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.233879   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.233797   67957 retry.go:31] will retry after 221.075745ms: waiting for machine to come up
	I1004 04:23:17.456413   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.456917   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.456941   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.456869   67957 retry.go:31] will retry after 354.386237ms: waiting for machine to come up
	I1004 04:23:17.812523   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.812949   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.812973   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.812905   67957 retry.go:31] will retry after 338.999517ms: waiting for machine to come up
	I1004 04:23:18.153589   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:18.154029   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:18.154056   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:18.153987   67957 retry.go:31] will retry after 555.533205ms: waiting for machine to come up
	I1004 04:23:18.710680   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:18.711155   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:18.711181   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:18.711104   67957 retry.go:31] will retry after 733.812197ms: waiting for machine to come up
	I1004 04:23:20.970507   66293 start.go:360] acquireMachinesLock for no-preload-658545: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:23:19.447202   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:19.447644   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:19.447671   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:19.447600   67957 retry.go:31] will retry after 575.303848ms: waiting for machine to come up
	I1004 04:23:20.024465   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:20.024788   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:20.024819   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:20.024735   67957 retry.go:31] will retry after 894.593683ms: waiting for machine to come up
	I1004 04:23:20.920880   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:20.921499   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:20.921522   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:20.921480   67957 retry.go:31] will retry after 924.978895ms: waiting for machine to come up
	I1004 04:23:21.848064   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:21.848498   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:21.848619   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:21.848550   67957 retry.go:31] will retry after 1.554806984s: waiting for machine to come up
	I1004 04:23:23.404569   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:23.404936   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:23.404964   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:23.404884   67957 retry.go:31] will retry after 1.700496318s: waiting for machine to come up
	I1004 04:23:25.106988   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:25.107410   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:25.107441   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:25.107351   67957 retry.go:31] will retry after 1.913555474s: waiting for machine to come up
	I1004 04:23:27.022672   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:27.023134   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:27.023161   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:27.023096   67957 retry.go:31] will retry after 3.208946613s: waiting for machine to come up
	I1004 04:23:30.235462   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:30.235910   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:30.235942   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:30.235868   67957 retry.go:31] will retry after 3.125545279s: waiting for machine to come up
	I1004 04:23:33.364563   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.365007   66755 main.go:141] libmachine: (embed-certs-934812) Found IP for machine: 192.168.61.74
	I1004 04:23:33.365031   66755 main.go:141] libmachine: (embed-certs-934812) Reserving static IP address...
	I1004 04:23:33.365047   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has current primary IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.365595   66755 main.go:141] libmachine: (embed-certs-934812) Reserved static IP address: 192.168.61.74
	I1004 04:23:33.365628   66755 main.go:141] libmachine: (embed-certs-934812) Waiting for SSH to be available...
	I1004 04:23:33.365648   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "embed-certs-934812", mac: "52:54:00:25:fb:50", ip: "192.168.61.74"} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.365667   66755 main.go:141] libmachine: (embed-certs-934812) DBG | skip adding static IP to network mk-embed-certs-934812 - found existing host DHCP lease matching {name: "embed-certs-934812", mac: "52:54:00:25:fb:50", ip: "192.168.61.74"}
	I1004 04:23:33.365682   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Getting to WaitForSSH function...
	I1004 04:23:33.367835   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.368155   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.368185   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.368297   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Using SSH client type: external
	I1004 04:23:33.368322   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa (-rw-------)
	I1004 04:23:33.368359   66755 main.go:141] libmachine: (embed-certs-934812) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.74 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:23:33.368369   66755 main.go:141] libmachine: (embed-certs-934812) DBG | About to run SSH command:
	I1004 04:23:33.368377   66755 main.go:141] libmachine: (embed-certs-934812) DBG | exit 0
	I1004 04:23:33.496067   66755 main.go:141] libmachine: (embed-certs-934812) DBG | SSH cmd err, output: <nil>: 
	I1004 04:23:33.496559   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetConfigRaw
	I1004 04:23:33.497310   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:33.500858   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.501360   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.501403   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.501750   66755 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/config.json ...
	I1004 04:23:33.502058   66755 machine.go:93] provisionDockerMachine start ...
	I1004 04:23:33.502084   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:33.502303   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.505899   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.506442   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.506475   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.506686   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.506947   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.507165   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.507324   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.507541   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.507744   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.507757   66755 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:23:33.624518   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:23:33.624547   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.624795   66755 buildroot.go:166] provisioning hostname "embed-certs-934812"
	I1004 04:23:33.624826   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.625021   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.627597   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.627916   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.627948   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.628115   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.628312   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.628444   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.628608   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.628785   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.629023   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.629040   66755 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-934812 && echo "embed-certs-934812" | sudo tee /etc/hostname
	I1004 04:23:33.758642   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-934812
	
	I1004 04:23:33.758681   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.761325   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.761654   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.761696   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.761849   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.762034   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.762164   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.762297   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.762426   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.762636   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.762652   66755 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-934812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-934812/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-934812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:23:33.889571   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:33.889601   66755 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:23:33.889642   66755 buildroot.go:174] setting up certificates
	I1004 04:23:33.889654   66755 provision.go:84] configureAuth start
	I1004 04:23:33.889681   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.889992   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:33.892657   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.893063   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.893087   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.893310   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.895770   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.896126   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.896162   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.896328   66755 provision.go:143] copyHostCerts
	I1004 04:23:33.896397   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:23:33.896408   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:23:33.896472   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:23:33.896565   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:23:33.896573   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:23:33.896595   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:23:33.896652   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:23:33.896659   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:23:33.896678   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:23:33.896724   66755 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.embed-certs-934812 san=[127.0.0.1 192.168.61.74 embed-certs-934812 localhost minikube]
	I1004 04:23:33.997867   66755 provision.go:177] copyRemoteCerts
	I1004 04:23:33.997923   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:23:33.997950   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.001050   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.001422   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.001461   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.001733   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.001961   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.002125   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.002246   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.090823   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:23:34.116934   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1004 04:23:34.669084   67282 start.go:364] duration metric: took 2m46.052475725s to acquireMachinesLock for "old-k8s-version-420062"
	I1004 04:23:34.669158   67282 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:34.669168   67282 fix.go:54] fixHost starting: 
	I1004 04:23:34.669584   67282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:34.669640   67282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:34.686790   67282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I1004 04:23:34.687312   67282 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:34.687829   67282 main.go:141] libmachine: Using API Version  1
	I1004 04:23:34.687857   67282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:34.688238   67282 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:34.688415   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:34.688579   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetState
	I1004 04:23:34.690288   67282 fix.go:112] recreateIfNeeded on old-k8s-version-420062: state=Stopped err=<nil>
	I1004 04:23:34.690326   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	W1004 04:23:34.690467   67282 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:34.692283   67282 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-420062" ...
	I1004 04:23:34.143763   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 04:23:34.168897   66755 provision.go:87] duration metric: took 279.227966ms to configureAuth
	I1004 04:23:34.168929   66755 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:23:34.169096   66755 config.go:182] Loaded profile config "embed-certs-934812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:23:34.169168   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.171638   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.171952   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.171977   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.172178   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.172349   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.172503   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.172594   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.172717   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:34.172924   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:34.172943   66755 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:23:34.411661   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:23:34.411690   66755 machine.go:96] duration metric: took 909.61315ms to provisionDockerMachine
	I1004 04:23:34.411703   66755 start.go:293] postStartSetup for "embed-certs-934812" (driver="kvm2")
	I1004 04:23:34.411716   66755 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:23:34.411734   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.412070   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:23:34.412099   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.415246   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.415583   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.415643   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.415802   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.415997   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.416170   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.416322   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.507385   66755 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:23:34.511963   66755 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:23:34.511990   66755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:23:34.512064   66755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:23:34.512152   66755 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:23:34.512270   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:23:34.522375   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:34.547860   66755 start.go:296] duration metric: took 136.143527ms for postStartSetup
	I1004 04:23:34.547904   66755 fix.go:56] duration metric: took 18.578910472s for fixHost
	I1004 04:23:34.547931   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.550715   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.551031   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.551067   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.551194   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.551391   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.551568   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.551724   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.551903   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:34.552055   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:34.552064   66755 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:23:34.668944   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015814.641353752
	
	I1004 04:23:34.668966   66755 fix.go:216] guest clock: 1728015814.641353752
	I1004 04:23:34.668974   66755 fix.go:229] Guest: 2024-10-04 04:23:34.641353752 +0000 UTC Remote: 2024-10-04 04:23:34.547909289 +0000 UTC m=+265.449211021 (delta=93.444463ms)
	I1004 04:23:34.668993   66755 fix.go:200] guest clock delta is within tolerance: 93.444463ms
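	The three fix.go lines above show how minikube compares the VM's clock (taken from `date +%s.%N` over SSH) against the host's wall clock and accepts the 93.444463ms skew. The sketch below redoes that arithmetic with the logged values; the one-second tolerance is an assumed placeholder for illustration, since the log does not print the actual threshold.

// clockskew.go: recomputes the guest/host clock delta logged above.
// Timestamps are taken from the log; the tolerance constant is an assumption.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1728015814, 641353752) // VM's "date +%s.%N": 1728015814.641353752
	remote, err := time.Parse(time.RFC3339Nano, "2024-10-04T04:23:34.547909289Z") // host-side timestamp from the log
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(remote)
	const tolerance = time.Second // assumed threshold, not shown in the log
	fmt.Printf("delta=%v within=%v\n", delta, delta > -tolerance && delta < tolerance)
	// prints: delta=93.444463ms within=true
}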
	I1004 04:23:34.668999   66755 start.go:83] releasing machines lock for "embed-certs-934812", held for 18.70003051s
	I1004 04:23:34.669024   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.669299   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:34.672346   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.672757   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.672796   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.672966   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673609   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673816   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673940   66755 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:23:34.673982   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.674020   66755 ssh_runner.go:195] Run: cat /version.json
	I1004 04:23:34.674043   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.676934   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677085   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677379   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.677406   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677449   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.677480   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677560   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.677677   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.677758   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.677811   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.677873   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.677928   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.677979   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.678022   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.761509   66755 ssh_runner.go:195] Run: systemctl --version
	I1004 04:23:34.784487   66755 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:23:34.934037   66755 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:23:34.942569   66755 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:23:34.942642   66755 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:23:34.960164   66755 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:23:34.960197   66755 start.go:495] detecting cgroup driver to use...
	I1004 04:23:34.960276   66755 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:23:34.979195   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:23:34.994660   66755 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:23:34.994747   66755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:23:35.011209   66755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:23:35.031746   66755 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:23:35.146164   66755 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:23:35.287092   66755 docker.go:233] disabling docker service ...
	I1004 04:23:35.287167   66755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:23:35.308007   66755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:23:35.323235   66755 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:23:35.473583   66755 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:23:35.610098   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:23:35.624276   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:23:35.643810   66755 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:23:35.643873   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.655804   66755 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:23:35.655875   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.668260   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.679770   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.692649   66755 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:23:35.704364   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.715539   66755 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.739272   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.754538   66755 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:23:35.766476   66755 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:23:35.766566   66755 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:23:35.781677   66755 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:23:35.792640   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:35.910787   66755 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:23:36.015877   66755 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:23:36.015948   66755 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:23:36.021573   66755 start.go:563] Will wait 60s for crictl version
	I1004 04:23:36.021642   66755 ssh_runner.go:195] Run: which crictl
	I1004 04:23:36.025605   66755 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:23:36.064644   66755 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:23:36.064714   66755 ssh_runner.go:195] Run: crio --version
	I1004 04:23:36.094751   66755 ssh_runner.go:195] Run: crio --version
	I1004 04:23:36.127213   66755 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:23:34.693590   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .Start
	I1004 04:23:34.693792   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring networks are active...
	I1004 04:23:34.694582   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring network default is active
	I1004 04:23:34.694917   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring network mk-old-k8s-version-420062 is active
	I1004 04:23:34.695322   67282 main.go:141] libmachine: (old-k8s-version-420062) Getting domain xml...
	I1004 04:23:34.696052   67282 main.go:141] libmachine: (old-k8s-version-420062) Creating domain...
	I1004 04:23:35.995511   67282 main.go:141] libmachine: (old-k8s-version-420062) Waiting to get IP...
	I1004 04:23:35.996465   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:35.996962   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:35.997031   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:35.996923   68093 retry.go:31] will retry after 296.620059ms: waiting for machine to come up
	I1004 04:23:36.295737   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:36.296226   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:36.296257   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:36.296182   68093 retry.go:31] will retry after 311.736827ms: waiting for machine to come up
	I1004 04:23:36.610158   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:36.610804   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:36.610829   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:36.610759   68093 retry.go:31] will retry after 440.646496ms: waiting for machine to come up
	I1004 04:23:37.053487   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:37.053956   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:37.053981   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:37.053923   68093 retry.go:31] will retry after 550.190101ms: waiting for machine to come up
	I1004 04:23:37.605404   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:37.605775   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:37.605815   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:37.605743   68093 retry.go:31] will retry after 721.648529ms: waiting for machine to come up
	I1004 04:23:38.328819   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:38.329323   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:38.329362   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:38.329281   68093 retry.go:31] will retry after 825.234448ms: waiting for machine to come up
	I1004 04:23:36.128549   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:36.131439   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:36.131827   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:36.131856   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:36.132054   66755 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1004 04:23:36.136650   66755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:23:36.149563   66755 kubeadm.go:883] updating cluster {Name:embed-certs-934812 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:23:36.149691   66755 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:23:36.149738   66755 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:36.188235   66755 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:23:36.188316   66755 ssh_runner.go:195] Run: which lz4
	I1004 04:23:36.192619   66755 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:23:36.196876   66755 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:23:36.196909   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 04:23:37.711672   66755 crio.go:462] duration metric: took 1.519102092s to copy over tarball
	I1004 04:23:37.711752   66755 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:23:39.155736   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:39.156199   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:39.156229   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:39.156150   68093 retry.go:31] will retry after 970.793402ms: waiting for machine to come up
	I1004 04:23:40.128963   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:40.129454   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:40.129507   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:40.129419   68093 retry.go:31] will retry after 1.460395601s: waiting for machine to come up
	I1004 04:23:41.592145   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:41.592653   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:41.592677   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:41.592600   68093 retry.go:31] will retry after 1.397092356s: waiting for machine to come up
	I1004 04:23:42.992176   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:42.992670   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:42.992724   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:42.992663   68093 retry.go:31] will retry after 1.560294099s: waiting for machine to come up
	I1004 04:23:39.864408   66755 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.152629063s)
	I1004 04:23:39.864437   66755 crio.go:469] duration metric: took 2.152732931s to extract the tarball
	I1004 04:23:39.864446   66755 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:23:39.902496   66755 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:39.956348   66755 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:23:39.956373   66755 cache_images.go:84] Images are preloaded, skipping loading
	I1004 04:23:39.956381   66755 kubeadm.go:934] updating node { 192.168.61.74 8443 v1.31.1 crio true true} ...
	I1004 04:23:39.956509   66755 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-934812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:23:39.956572   66755 ssh_runner.go:195] Run: crio config
	I1004 04:23:40.014396   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:23:40.014423   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:23:40.014436   66755 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:23:40.014470   66755 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.74 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-934812 NodeName:embed-certs-934812 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:23:40.014642   66755 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-934812"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:23:40.014728   66755 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:23:40.025328   66755 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:23:40.025441   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:23:40.035733   66755 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1004 04:23:40.057427   66755 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:23:40.078636   66755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1004 04:23:40.100583   66755 ssh_runner.go:195] Run: grep 192.168.61.74	control-plane.minikube.internal$ /etc/hosts
	I1004 04:23:40.104780   66755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.74	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:23:40.118484   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:40.245425   66755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:23:40.268739   66755 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812 for IP: 192.168.61.74
	I1004 04:23:40.268764   66755 certs.go:194] generating shared ca certs ...
	I1004 04:23:40.268792   66755 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:23:40.268962   66755 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:23:40.269022   66755 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:23:40.269035   66755 certs.go:256] generating profile certs ...
	I1004 04:23:40.269145   66755 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/client.key
	I1004 04:23:40.269226   66755 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.key.0181efa9
	I1004 04:23:40.269290   66755 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.key
	I1004 04:23:40.269436   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:23:40.269483   66755 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:23:40.269497   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:23:40.269535   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:23:40.269575   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:23:40.269607   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:23:40.269658   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:40.270269   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:23:40.316579   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:23:40.352928   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:23:40.383124   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:23:40.410211   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1004 04:23:40.442388   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 04:23:40.473580   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:23:40.501589   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:23:40.527299   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:23:40.551994   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:23:40.576644   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:23:40.601518   66755 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:23:40.620092   66755 ssh_runner.go:195] Run: openssl version
	I1004 04:23:40.626451   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:23:40.637754   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.642413   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.642472   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.648449   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:23:40.659371   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:23:40.670276   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.674793   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.674844   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.680550   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:23:40.691439   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:23:40.702237   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.706876   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.706937   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.712970   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:23:40.724505   66755 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:23:40.729486   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:23:40.735720   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:23:40.741680   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:23:40.747975   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:23:40.754056   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:23:40.760235   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1004 04:23:40.766463   66755 kubeadm.go:392] StartCluster: {Name:embed-certs-934812 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:23:40.766576   66755 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:23:40.766635   66755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:23:40.805927   66755 cri.go:89] found id: ""
	I1004 04:23:40.805995   66755 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:23:40.816693   66755 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:23:40.816717   66755 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:23:40.816770   66755 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:23:40.827024   66755 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:23:40.828056   66755 kubeconfig.go:125] found "embed-certs-934812" server: "https://192.168.61.74:8443"
	I1004 04:23:40.830076   66755 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:23:40.840637   66755 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.74
	I1004 04:23:40.840673   66755 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:23:40.840686   66755 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:23:40.840741   66755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:23:40.877659   66755 cri.go:89] found id: ""
	I1004 04:23:40.877737   66755 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:23:40.894712   66755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:23:40.904202   66755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:23:40.904224   66755 kubeadm.go:157] found existing configuration files:
	
	I1004 04:23:40.904290   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:23:40.913941   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:23:40.914003   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:23:40.924730   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:23:40.934706   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:23:40.934784   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:23:40.945008   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:23:40.954864   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:23:40.954949   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:23:40.965357   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:23:40.975380   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:23:40.975459   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:23:40.986157   66755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:23:41.001260   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:41.129150   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:41.839910   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:42.059079   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:42.132717   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:42.204227   66755 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:23:42.204389   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:42.704572   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.205099   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.704555   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.720983   66755 api_server.go:72] duration metric: took 1.516755506s to wait for apiserver process to appear ...
	I1004 04:23:43.721020   66755 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:23:43.721043   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.578729   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:23:46.578764   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:23:46.578780   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.611578   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:23:46.611609   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:23:46.721894   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.728611   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:46.728649   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:47.221889   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:47.229348   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:47.229382   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:47.721971   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:47.741433   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:47.741460   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:48.222154   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:48.226802   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 200:
	ok
	I1004 04:23:48.233611   66755 api_server.go:141] control plane version: v1.31.1
	I1004 04:23:48.233645   66755 api_server.go:131] duration metric: took 4.512616682s to wait for apiserver health ...
	I1004 04:23:48.233655   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:23:48.233662   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:23:48.235421   66755 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:23:44.555619   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:44.556128   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:44.556154   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:44.556061   68093 retry.go:31] will retry after 2.564674777s: waiting for machine to come up
	I1004 04:23:47.123819   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:47.124235   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:47.124263   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:47.124181   68093 retry.go:31] will retry after 2.408805702s: waiting for machine to come up
	I1004 04:23:48.236675   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:23:48.248304   66755 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:23:48.273584   66755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:23:48.288132   66755 system_pods.go:59] 8 kube-system pods found
	I1004 04:23:48.288174   66755 system_pods.go:61] "coredns-7c65d6cfc9-z7pqn" [f206a8bf-5c18-49f2-9fae-a48a38d608a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:23:48.288208   66755 system_pods.go:61] "etcd-embed-certs-934812" [07a8f2db-6d47-469b-b0e4-749d1e106522] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:23:48.288218   66755 system_pods.go:61] "kube-apiserver-embed-certs-934812" [f36bc69a-a04e-40c2-8f78-a983ddbf28aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:23:48.288227   66755 system_pods.go:61] "kube-controller-manager-embed-certs-934812" [06d73118-fa31-4c98-b1e8-099611718b19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:23:48.288232   66755 system_pods.go:61] "kube-proxy-9qpgb" [6d833f16-4b8e-4409-99b6-214babe699c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 04:23:48.288238   66755 system_pods.go:61] "kube-scheduler-embed-certs-934812" [d076a245-49b6-4d8b-949a-2b559cd1d4d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:23:48.288243   66755 system_pods.go:61] "metrics-server-6867b74b74-d5b6b" [f4ec5d83-22a7-49e5-97e9-3519a29484fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:23:48.288250   66755 system_pods.go:61] "storage-provisioner" [2e76a95b-d6e2-4c1d-b954-3da8c2670a4a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 04:23:48.288259   66755 system_pods.go:74] duration metric: took 14.644463ms to wait for pod list to return data ...
	I1004 04:23:48.288265   66755 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:23:48.293121   66755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:23:48.293153   66755 node_conditions.go:123] node cpu capacity is 2
	I1004 04:23:48.293166   66755 node_conditions.go:105] duration metric: took 4.895489ms to run NodePressure ...
	I1004 04:23:48.293184   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:48.633398   66755 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:23:48.639243   66755 kubeadm.go:739] kubelet initialised
	I1004 04:23:48.639282   66755 kubeadm.go:740] duration metric: took 5.842777ms waiting for restarted kubelet to initialise ...
	I1004 04:23:48.639293   66755 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:23:48.650460   66755 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace to be "Ready" ...
	I1004 04:23:49.535979   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:49.536361   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:49.536388   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:49.536332   68093 retry.go:31] will retry after 4.242056709s: waiting for machine to come up
	I1004 04:23:50.657094   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:52.657717   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:55.089234   67541 start.go:364] duration metric: took 2m31.706739813s to acquireMachinesLock for "default-k8s-diff-port-281471"
	I1004 04:23:55.089300   67541 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:55.089311   67541 fix.go:54] fixHost starting: 
	I1004 04:23:55.089673   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:55.089718   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:55.110154   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
	I1004 04:23:55.110566   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:55.111001   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:23:55.111025   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:55.111417   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:55.111627   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:23:55.111794   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:23:55.113328   67541 fix.go:112] recreateIfNeeded on default-k8s-diff-port-281471: state=Stopped err=<nil>
	I1004 04:23:55.113356   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	W1004 04:23:55.113537   67541 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:55.115190   67541 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-281471" ...
	I1004 04:23:53.783128   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.783631   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has current primary IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.783669   67282 main.go:141] libmachine: (old-k8s-version-420062) Found IP for machine: 192.168.50.146
	I1004 04:23:53.783684   67282 main.go:141] libmachine: (old-k8s-version-420062) Reserving static IP address...
	I1004 04:23:53.784173   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "old-k8s-version-420062", mac: "52:54:00:fb:e4:4f", ip: "192.168.50.146"} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.784206   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | skip adding static IP to network mk-old-k8s-version-420062 - found existing host DHCP lease matching {name: "old-k8s-version-420062", mac: "52:54:00:fb:e4:4f", ip: "192.168.50.146"}
	I1004 04:23:53.784222   67282 main.go:141] libmachine: (old-k8s-version-420062) Reserved static IP address: 192.168.50.146
	I1004 04:23:53.784238   67282 main.go:141] libmachine: (old-k8s-version-420062) Waiting for SSH to be available...
	I1004 04:23:53.784250   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Getting to WaitForSSH function...
	I1004 04:23:53.786551   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.786985   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.787016   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.787207   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using SSH client type: external
	I1004 04:23:53.787244   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa (-rw-------)
	I1004 04:23:53.787285   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:23:53.787301   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | About to run SSH command:
	I1004 04:23:53.787315   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | exit 0
	I1004 04:23:53.916121   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | SSH cmd err, output: <nil>: 
	I1004 04:23:53.916487   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetConfigRaw
	I1004 04:23:53.917200   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:53.919846   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.920295   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.920323   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.920641   67282 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/config.json ...
	I1004 04:23:53.920902   67282 machine.go:93] provisionDockerMachine start ...
	I1004 04:23:53.920930   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:53.921137   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:53.923647   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.924000   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.924039   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.924198   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:53.924375   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:53.924508   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:53.924659   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:53.924796   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:53.925024   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:53.925036   67282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:23:54.044565   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:23:54.044595   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.044820   67282 buildroot.go:166] provisioning hostname "old-k8s-version-420062"
	I1004 04:23:54.044837   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.045006   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.047682   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.048032   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.048060   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.048186   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.048376   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.048525   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.048694   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.048853   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.049077   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.049098   67282 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-420062 && echo "old-k8s-version-420062" | sudo tee /etc/hostname
	I1004 04:23:54.183772   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-420062
	
	I1004 04:23:54.183835   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.186969   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.187333   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.187368   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.187754   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.188000   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.188177   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.188334   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.188559   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.188778   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.188803   67282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-420062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-420062/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-420062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:23:54.313827   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:54.313852   67282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:23:54.313896   67282 buildroot.go:174] setting up certificates
	I1004 04:23:54.313913   67282 provision.go:84] configureAuth start
	I1004 04:23:54.313925   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.314208   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:54.317028   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.317378   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.317408   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.317549   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.320292   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.320690   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.320718   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.320874   67282 provision.go:143] copyHostCerts
	I1004 04:23:54.320945   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:23:54.320957   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:23:54.321020   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:23:54.321144   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:23:54.321157   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:23:54.321184   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:23:54.321269   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:23:54.321279   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:23:54.321306   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:23:54.321378   67282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-420062 san=[127.0.0.1 192.168.50.146 localhost minikube old-k8s-version-420062]
	I1004 04:23:54.395370   67282 provision.go:177] copyRemoteCerts
	I1004 04:23:54.395422   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:23:54.395452   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.398647   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.399153   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.399194   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.399392   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.399582   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.399852   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.399991   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:54.491055   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:23:54.523206   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1004 04:23:54.549843   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:23:54.580403   67282 provision.go:87] duration metric: took 266.475364ms to configureAuth
	I1004 04:23:54.580438   67282 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:23:54.580645   67282 config.go:182] Loaded profile config "old-k8s-version-420062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1004 04:23:54.580736   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.583200   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.583489   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.583522   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.583672   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.583871   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.584066   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.584195   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.584402   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.584567   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.584582   67282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:23:54.835402   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:23:54.835436   67282 machine.go:96] duration metric: took 914.509404ms to provisionDockerMachine
	I1004 04:23:54.835451   67282 start.go:293] postStartSetup for "old-k8s-version-420062" (driver="kvm2")
	I1004 04:23:54.835466   67282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:23:54.835491   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:54.835870   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:23:54.835902   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.838257   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.838645   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.838670   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.838810   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.838972   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.839117   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.839247   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:54.927041   67282 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:23:54.931330   67282 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:23:54.931357   67282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:23:54.931424   67282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:23:54.931538   67282 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:23:54.931658   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:23:54.941402   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:54.967433   67282 start.go:296] duration metric: took 131.968424ms for postStartSetup
	I1004 04:23:54.967495   67282 fix.go:56] duration metric: took 20.29830643s for fixHost
	I1004 04:23:54.967523   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.970138   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.970485   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.970502   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.970802   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.971000   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.971164   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.971330   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.971560   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.971739   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.971751   67282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:23:55.089031   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015835.056238818
	
	I1004 04:23:55.089054   67282 fix.go:216] guest clock: 1728015835.056238818
	I1004 04:23:55.089063   67282 fix.go:229] Guest: 2024-10-04 04:23:55.056238818 +0000 UTC Remote: 2024-10-04 04:23:54.967501465 +0000 UTC m=+186.499621032 (delta=88.737353ms)
	I1004 04:23:55.089086   67282 fix.go:200] guest clock delta is within tolerance: 88.737353ms
	I1004 04:23:55.089093   67282 start.go:83] releasing machines lock for "old-k8s-version-420062", held for 20.419961099s
	I1004 04:23:55.089124   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.089472   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:55.092047   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.092519   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.092552   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.092784   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093364   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093566   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093670   67282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:23:55.093715   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:55.093808   67282 ssh_runner.go:195] Run: cat /version.json
	I1004 04:23:55.093834   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:55.096451   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.096829   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.096862   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.096881   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.097173   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:55.097364   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:55.097446   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.097474   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.097548   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:55.097685   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:55.097816   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:55.097823   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:55.097953   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:55.098106   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:55.207195   67282 ssh_runner.go:195] Run: systemctl --version
	I1004 04:23:55.214080   67282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:23:55.369882   67282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:23:55.376111   67282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:23:55.376171   67282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:23:55.393916   67282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:23:55.393945   67282 start.go:495] detecting cgroup driver to use...
	I1004 04:23:55.394015   67282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:23:55.411330   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:23:55.427665   67282 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:23:55.427734   67282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:23:55.445180   67282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:23:55.465131   67282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:23:55.596260   67282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:23:55.781647   67282 docker.go:233] disabling docker service ...
	I1004 04:23:55.781711   67282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:23:55.801252   67282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:23:55.817688   67282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:23:55.952563   67282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:23:56.081096   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:23:56.096194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:23:56.116859   67282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1004 04:23:56.116924   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.129060   67282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:23:56.129133   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.141246   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.158759   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.172580   67282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:23:56.192027   67282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:23:56.206698   67282 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:23:56.206757   67282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:23:56.223074   67282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:23:56.241061   67282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:56.365616   67282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:23:56.474445   67282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:23:56.474519   67282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:23:56.480077   67282 start.go:563] Will wait 60s for crictl version
	I1004 04:23:56.480133   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:23:56.485207   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:23:56.537710   67282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:23:56.537802   67282 ssh_runner.go:195] Run: crio --version
	I1004 04:23:56.571679   67282 ssh_runner.go:195] Run: crio --version
	I1004 04:23:56.605639   67282 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1004 04:23:55.116525   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Start
	I1004 04:23:55.116723   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring networks are active...
	I1004 04:23:55.117665   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring network default is active
	I1004 04:23:55.118079   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring network mk-default-k8s-diff-port-281471 is active
	I1004 04:23:55.118565   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Getting domain xml...
	I1004 04:23:55.119417   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Creating domain...
	I1004 04:23:56.429715   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting to get IP...
	I1004 04:23:56.430752   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.431261   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.431353   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.431245   68239 retry.go:31] will retry after 200.843618ms: waiting for machine to come up
	I1004 04:23:56.633542   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.633974   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.634003   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.633923   68239 retry.go:31] will retry after 291.906374ms: waiting for machine to come up
	I1004 04:23:56.927325   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.927847   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.927880   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.927813   68239 retry.go:31] will retry after 374.509137ms: waiting for machine to come up
	I1004 04:23:57.304251   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.304713   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.304738   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:57.304671   68239 retry.go:31] will retry after 583.046975ms: waiting for machine to come up
	I1004 04:23:57.889410   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.889837   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.889868   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:57.889795   68239 retry.go:31] will retry after 549.483036ms: waiting for machine to come up
	I1004 04:23:56.606945   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:56.610421   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:56.610952   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:56.610976   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:56.611373   67282 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1004 04:23:56.615872   67282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:23:56.629783   67282 kubeadm.go:883] updating cluster {Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:23:56.629932   67282 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1004 04:23:56.629983   67282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:56.690260   67282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:23:56.690343   67282 ssh_runner.go:195] Run: which lz4
	I1004 04:23:56.695808   67282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:23:56.701593   67282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:23:56.701623   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1004 04:23:54.156612   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"True"
	I1004 04:23:54.156637   66755 pod_ready.go:82] duration metric: took 5.506141622s for pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace to be "Ready" ...
	I1004 04:23:54.156646   66755 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:23:56.164534   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:58.166994   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:58.440643   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:58.441080   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:58.441109   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:58.441034   68239 retry.go:31] will retry after 585.437747ms: waiting for machine to come up
	I1004 04:23:59.027951   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.028414   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.028441   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:59.028369   68239 retry.go:31] will retry after 773.32668ms: waiting for machine to come up
	I1004 04:23:59.803329   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.803752   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.803793   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:59.803722   68239 retry.go:31] will retry after 936.396482ms: waiting for machine to come up
	I1004 04:24:00.741805   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:00.742328   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:00.742372   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:00.742262   68239 retry.go:31] will retry after 1.294836266s: waiting for machine to come up
	I1004 04:24:02.038222   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:02.038750   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:02.038785   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:02.038699   68239 retry.go:31] will retry after 2.282660025s: waiting for machine to come up
	I1004 04:23:58.525796   67282 crio.go:462] duration metric: took 1.830039762s to copy over tarball
	I1004 04:23:58.525868   67282 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:24:01.514552   67282 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.98865618s)
	I1004 04:24:01.514585   67282 crio.go:469] duration metric: took 2.988759159s to extract the tarball
	I1004 04:24:01.514595   67282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:24:01.562130   67282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:01.598856   67282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:24:01.598882   67282 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 04:24:01.598960   67282 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:01.599035   67282 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.599047   67282 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.599048   67282 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1004 04:24:01.599020   67282 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.599025   67282 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.598967   67282 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.598967   67282 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.600760   67282 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.600772   67282 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1004 04:24:01.600767   67282 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:01.600791   67282 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.600802   67282 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.600804   67282 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.600807   67282 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.600840   67282 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.837527   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.877366   67282 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1004 04:24:01.877413   67282 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.877464   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:01.882328   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.914693   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.934055   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.941737   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.943929   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.944540   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.948337   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.970977   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.995537   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1004 04:24:02.127073   67282 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1004 04:24:02.127097   67282 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1004 04:24:02.127121   67282 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.127121   67282 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.127156   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.127159   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128471   67282 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1004 04:24:02.128532   67282 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.128535   67282 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1004 04:24:02.128560   67282 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.128571   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128595   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128598   67282 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1004 04:24:02.128627   67282 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.128669   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128730   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1004 04:24:02.128761   67282 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1004 04:24:02.128783   67282 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1004 04:24:02.128815   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.133675   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.133724   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.141911   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.141950   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.141989   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.142044   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.263733   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.263744   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.263798   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.265990   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.297523   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.297566   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.379282   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.379318   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.379331   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.417271   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.454521   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.454559   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.496644   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1004 04:24:02.533632   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1004 04:24:02.533690   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1004 04:24:02.533750   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1004 04:24:02.568138   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1004 04:24:02.568153   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1004 04:24:02.911933   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:03.055844   67282 cache_images.go:92] duration metric: took 1.456943316s to LoadCachedImages
	W1004 04:24:03.055959   67282 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
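
For context on the "needs transfer" decisions above: the runtime is asked for each image's stored ID (via sudo podman image inspect --format {{.Id}}), and when that ID is missing or does not match the expected hash, the stale tag is removed with crictl rmi and the tarball from the local image cache is loaded instead. Below is a minimal, hypothetical Go sketch of that presence check; it is not minikube's actual code, and it assumes podman is installed on the node and the caller can sudo.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // runtimeImageID asks podman on the node for the stored ID of an image,
    // mirroring the "podman image inspect --format {{.Id}}" calls in the log.
    func runtimeImageID(image string) (string, error) {
    	out, err := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return "", err // image not present (or podman failed)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	image := "registry.k8s.io/kube-apiserver:v1.20.0"
    	// Expected hash taken from the log line above.
    	wantID := "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
    	gotID, err := runtimeImageID(image)
    	if err != nil || gotID != wantID {
    		fmt.Printf("%q needs transfer from the local image cache\n", image)
    		return
    	}
    	fmt.Printf("%q already present, skipping\n", image)
    }

In this run the subsequent transfer step fails because the cached tarball itself (kube-controller-manager_v1.20.0) is missing on the Jenkins host, which is exactly what the warning above reports.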
	I1004 04:24:03.055976   67282 kubeadm.go:934] updating node { 192.168.50.146 8443 v1.20.0 crio true true} ...
	I1004 04:24:03.056087   67282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-420062 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:03.056162   67282 ssh_runner.go:195] Run: crio config
	I1004 04:24:03.103752   67282 cni.go:84] Creating CNI manager for ""
	I1004 04:24:03.103792   67282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:03.103805   67282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:03.103826   67282 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.146 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-420062 NodeName:old-k8s-version-420062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1004 04:24:03.103952   67282 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-420062"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:03.104008   67282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1004 04:24:03.114316   67282 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:03.114372   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:03.124059   67282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1004 04:24:03.143310   67282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:03.161143   67282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1004 04:24:03.178444   67282 ssh_runner.go:195] Run: grep 192.168.50.146	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:03.182235   67282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:03.195103   67282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:03.317820   67282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:03.334820   67282 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062 for IP: 192.168.50.146
	I1004 04:24:03.334840   67282 certs.go:194] generating shared ca certs ...
	I1004 04:24:03.334855   67282 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:03.335008   67282 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:03.335049   67282 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:03.335059   67282 certs.go:256] generating profile certs ...
	I1004 04:24:03.335156   67282 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.key
	I1004 04:24:03.335212   67282 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key.c1f9ed6b
	I1004 04:24:03.335260   67282 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key
	I1004 04:24:03.335368   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:03.335394   67282 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:03.335401   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:03.335426   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:03.335451   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:03.335476   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:03.335518   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:03.336260   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:03.373985   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:03.408150   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:03.444219   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:03.493160   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1004 04:24:00.665171   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:02.815874   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:04.022715   66755 pod_ready.go:93] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.022744   66755 pod_ready.go:82] duration metric: took 9.866089641s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.022756   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.028094   66755 pod_ready.go:93] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.028115   66755 pod_ready.go:82] duration metric: took 5.350911ms for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.028123   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.033106   66755 pod_ready.go:93] pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.033124   66755 pod_ready.go:82] duration metric: took 4.995208ms for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.033132   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9qpgb" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.037388   66755 pod_ready.go:93] pod "kube-proxy-9qpgb" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.037409   66755 pod_ready.go:82] duration metric: took 4.270278ms for pod "kube-proxy-9qpgb" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.037420   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.042717   66755 pod_ready.go:93] pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.042737   66755 pod_ready.go:82] duration metric: took 5.30887ms for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.042747   66755 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" ...
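
The pod_ready lines above are a readiness poll: each control-plane pod in kube-system is re-read until its Ready condition turns True, with a 4m0s budget per pod. A compact client-go sketch of one such check follows; the kubeconfig source, pod name, and retry interval are illustrative assumptions, and this is not the test harness's own pod_ready helper.

    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"etcd-embed-certs-934812", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // the log shows retries a few seconds apart
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }

In this section the metrics-server pod polled next stays at Ready:False, which is why those status lines keep repeating further down.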
	I1004 04:24:04.324259   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:04.324749   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:04.324811   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:04.324726   68239 retry.go:31] will retry after 2.070089599s: waiting for machine to come up
	I1004 04:24:06.396547   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:06.396991   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:06.397015   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:06.396944   68239 retry.go:31] will retry after 3.403718824s: waiting for machine to come up
	I1004 04:24:03.533084   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 04:24:03.565405   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:03.613938   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:24:03.642711   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:03.674784   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:03.706968   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:03.731329   67282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:03.749003   67282 ssh_runner.go:195] Run: openssl version
	I1004 04:24:03.755219   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:03.766499   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.771322   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.771413   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.778185   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:03.790581   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:03.802556   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.807312   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.807373   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.813595   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:03.825043   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:03.835389   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.840004   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.840051   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.847540   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:03.862303   67282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:03.868029   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:03.874811   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:03.880797   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:03.886622   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:03.892273   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:03.898129   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
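
Each openssl run above uses -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; that is how the restart path decides whether the existing control-plane certs can be reused. A rough standalone Go sketch of the same check (cert paths copied from the log, openssl assumed present on the node):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	certs := []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	}
    	for _, c := range certs {
    		// -checkend exits non-zero if the cert expires within the window.
    		err := exec.Command("openssl", "x509", "-noout", "-in", c,
    			"-checkend", "86400").Run()
    		if err != nil {
    			fmt.Printf("%s expires within 24h (or could not be read): %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%s valid for at least 24h\n", c)
    	}
    }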
	I1004 04:24:03.905775   67282 kubeadm.go:392] StartCluster: {Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:03.905852   67282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:03.905890   67282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:03.954627   67282 cri.go:89] found id: ""
	I1004 04:24:03.954702   67282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:03.965146   67282 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:03.965170   67282 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:03.965236   67282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:03.975404   67282 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:03.976362   67282 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-420062" does not appear in /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:24:03.976990   67282 kubeconfig.go:62] /home/jenkins/minikube-integration/19546-9647/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-420062" cluster setting kubeconfig missing "old-k8s-version-420062" context setting]
	I1004 04:24:03.977906   67282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:03.979485   67282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:03.989487   67282 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.146
	I1004 04:24:03.989517   67282 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:03.989529   67282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:03.989577   67282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:04.031536   67282 cri.go:89] found id: ""
	I1004 04:24:04.031607   67282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:04.048652   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:04.057813   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:04.057830   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:04.057867   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:24:04.066213   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:04.066252   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:04.074904   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:24:04.083485   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:04.083522   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:04.092314   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:24:04.100528   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:04.100572   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:04.109232   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:24:04.118051   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:04.118091   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:24:04.127430   67282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:04.137949   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:04.272627   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:04.940435   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.181288   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.268873   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.373549   67282 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:05.373653   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:05.874543   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.374154   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.874343   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:07.374744   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:07.874734   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:08.374255   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.050700   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:08.548473   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:09.802504   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:09.802912   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:09.802937   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:09.802870   68239 retry.go:31] will retry after 3.430575602s: waiting for machine to come up
	I1004 04:24:13.236792   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.237230   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Found IP for machine: 192.168.39.201
	I1004 04:24:13.237251   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Reserving static IP address...
	I1004 04:24:13.237268   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has current primary IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.237712   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-281471", mac: "52:54:00:cd:36:92", ip: "192.168.39.201"} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.237745   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Reserved static IP address: 192.168.39.201
	I1004 04:24:13.237765   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | skip adding static IP to network mk-default-k8s-diff-port-281471 - found existing host DHCP lease matching {name: "default-k8s-diff-port-281471", mac: "52:54:00:cd:36:92", ip: "192.168.39.201"}
	I1004 04:24:13.237786   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Getting to WaitForSSH function...
	I1004 04:24:13.237805   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for SSH to be available...
	I1004 04:24:13.240068   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.240354   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.240384   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.240514   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Using SSH client type: external
	I1004 04:24:13.240540   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa (-rw-------)
	I1004 04:24:13.240577   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:24:13.240594   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | About to run SSH command:
	I1004 04:24:13.240608   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | exit 0
	I1004 04:24:08.874627   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:09.374627   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:09.874278   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.374675   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.873949   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:11.373966   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:11.873775   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:12.373874   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:12.874010   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:13.374575   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
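
The repeated pgrep runs above, spaced roughly 500ms apart, are the wait for a kube-apiserver process to appear after the kubeadm init phases. A bare-bones Go sketch of that wait loop (the 2-minute timeout is an illustrative assumption; the pgrep pattern is copied from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a process matching the pattern exists.
    		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			fmt.Println("kube-apiserver process is up")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Fprintln(os.Stderr, "timed out waiting for kube-apiserver")
    	os.Exit(1)
    }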
	I1004 04:24:10.550171   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:13.049596   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:14.741098   66293 start.go:364] duration metric: took 53.770546651s to acquireMachinesLock for "no-preload-658545"
	I1004 04:24:14.741156   66293 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:24:14.741164   66293 fix.go:54] fixHost starting: 
	I1004 04:24:14.741565   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:14.741595   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:14.758364   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37947
	I1004 04:24:14.758823   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:14.759356   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:24:14.759383   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:14.759700   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:14.759895   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:14.760077   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:24:14.761849   66293 fix.go:112] recreateIfNeeded on no-preload-658545: state=Stopped err=<nil>
	I1004 04:24:14.761873   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	W1004 04:24:14.762037   66293 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:24:14.764123   66293 out.go:177] * Restarting existing kvm2 VM for "no-preload-658545" ...
	I1004 04:24:13.371830   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | SSH cmd err, output: <nil>: 
	I1004 04:24:13.372219   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetConfigRaw
	I1004 04:24:13.372817   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:13.375676   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.376080   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.376116   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.376393   67541 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/config.json ...
	I1004 04:24:13.376616   67541 machine.go:93] provisionDockerMachine start ...
	I1004 04:24:13.376638   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:13.376845   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.379413   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.379847   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.379908   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.380015   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.380204   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.380360   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.380493   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.380657   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.380913   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.380988   67541 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:24:13.492488   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:24:13.492528   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.492749   67541 buildroot.go:166] provisioning hostname "default-k8s-diff-port-281471"
	I1004 04:24:13.492768   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.492928   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.495691   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.496003   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.496031   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.496160   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.496368   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.496530   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.496651   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.496785   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.497017   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.497034   67541 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-281471 && echo "default-k8s-diff-port-281471" | sudo tee /etc/hostname
	I1004 04:24:13.627336   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-281471
	
	I1004 04:24:13.627364   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.630757   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.631162   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.631199   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.631486   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.631701   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.631874   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.632018   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.632216   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.632431   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.632457   67541 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-281471' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-281471/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-281471' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:24:13.758386   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:24:13.758413   67541 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:24:13.758462   67541 buildroot.go:174] setting up certificates
	I1004 04:24:13.758472   67541 provision.go:84] configureAuth start
	I1004 04:24:13.758484   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.758740   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:13.761590   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.761899   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.761939   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.762068   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.764293   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.764644   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.764672   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.764811   67541 provision.go:143] copyHostCerts
	I1004 04:24:13.764869   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:24:13.764880   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:24:13.764936   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:24:13.765046   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:24:13.765055   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:24:13.765075   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:24:13.765127   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:24:13.765135   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:24:13.765160   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:24:13.765235   67541 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-281471 san=[127.0.0.1 192.168.39.201 default-k8s-diff-port-281471 localhost minikube]
	I1004 04:24:14.075640   67541 provision.go:177] copyRemoteCerts
	I1004 04:24:14.075698   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:24:14.075722   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.078293   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.078658   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.078689   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.078827   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.079048   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.079213   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.079348   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.167232   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:24:14.193065   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1004 04:24:14.218112   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:24:14.243281   67541 provision.go:87] duration metric: took 484.783764ms to configureAuth
	I1004 04:24:14.243310   67541 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:24:14.243506   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:14.243593   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.246497   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.246837   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.246885   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.247019   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.247211   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.247384   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.247551   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.247719   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:14.247909   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:14.247923   67541 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:24:14.487651   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:24:14.487675   67541 machine.go:96] duration metric: took 1.11104473s to provisionDockerMachine
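	Note: provisioning above wrote /etc/sysconfig/crio.minikube over SSH and restarted crio. A minimal sketch (assuming shell access to the guest) of double-checking that the override landed and the runtime came back up:
	    cat /etc/sysconfig/crio.minikube          # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    sudo systemctl status crio --no-pager     # crio should be active (running) after the restart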
	I1004 04:24:14.487686   67541 start.go:293] postStartSetup for "default-k8s-diff-port-281471" (driver="kvm2")
	I1004 04:24:14.487696   67541 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:24:14.487733   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.488084   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:24:14.488114   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.490844   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.491198   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.491229   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.491372   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.491562   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.491700   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.491815   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.579398   67541 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:24:14.584068   67541 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:24:14.584098   67541 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:24:14.584179   67541 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:24:14.584274   67541 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:24:14.584379   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:24:14.594853   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:14.621833   67541 start.go:296] duration metric: took 134.135256ms for postStartSetup
	I1004 04:24:14.621874   67541 fix.go:56] duration metric: took 19.532563115s for fixHost
	I1004 04:24:14.621895   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.625077   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.625418   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.625443   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.625678   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.625900   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.626059   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.626205   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.626373   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:14.626589   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:14.626603   67541 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:24:14.740932   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015854.697826512
	
	I1004 04:24:14.740950   67541 fix.go:216] guest clock: 1728015854.697826512
	I1004 04:24:14.740957   67541 fix.go:229] Guest: 2024-10-04 04:24:14.697826512 +0000 UTC Remote: 2024-10-04 04:24:14.621877739 +0000 UTC m=+171.379203860 (delta=75.948773ms)
	I1004 04:24:14.741000   67541 fix.go:200] guest clock delta is within tolerance: 75.948773ms
	I1004 04:24:14.741007   67541 start.go:83] releasing machines lock for "default-k8s-diff-port-281471", held for 19.651737082s
	I1004 04:24:14.741031   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.741291   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:14.744142   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.744498   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.744518   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.744720   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745368   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745559   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745665   67541 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:24:14.745706   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.745802   67541 ssh_runner.go:195] Run: cat /version.json
	I1004 04:24:14.745843   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.748443   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748779   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.748813   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748838   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748927   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.749064   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.749245   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.749267   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.749283   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.749441   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.749481   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.749589   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.749725   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.749856   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.833632   67541 ssh_runner.go:195] Run: systemctl --version
	I1004 04:24:14.863812   67541 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:24:15.016823   67541 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:24:15.023613   67541 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:24:15.023696   67541 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:24:15.042546   67541 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:24:15.042576   67541 start.go:495] detecting cgroup driver to use...
	I1004 04:24:15.042645   67541 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:24:15.060267   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:24:15.076088   67541 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:24:15.076155   67541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:24:15.091741   67541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:24:15.107153   67541 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:24:15.230591   67541 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:24:15.381704   67541 docker.go:233] disabling docker service ...
	I1004 04:24:15.381776   67541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:24:15.397616   67541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:24:15.412350   67541 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:24:15.569525   67541 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:24:15.690120   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:24:15.705348   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:24:15.728253   67541 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:24:15.728334   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.739875   67541 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:24:15.739951   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.751997   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.765898   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.777917   67541 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:24:15.791235   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.802390   67541 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.825385   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
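	Note: the sed chain above edits /etc/crio/crio.conf.d/02-crio.conf in place. A rough sketch of checking the result on the guest (the exact file layout is an assumption; the key names and values are the ones set by the commands in the log):
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # roughly expected after the edits above:
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #     "net.ipv4.ip_unprivileged_port_start=0",   (inside the default_sysctls list)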
	I1004 04:24:15.837278   67541 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:24:15.848791   67541 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:24:15.848864   67541 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:24:15.870774   67541 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
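	Note: the sysctl probe above exits with status 255 only because br_netfilter is not yet loaded, which is why minikube immediately runs modprobe. A minimal sketch (assuming shell access to the guest) of the same check done by hand:
	    lsmod | grep br_netfilter                        # empty until the module is loaded
	    sudo modprobe br_netfilter
	    sudo sysctl net.bridge.bridge-nf-call-iptables   # resolvable once br_netfilter is in place
	    cat /proc/sys/net/ipv4/ip_forward                # expect 1 after the echo above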
	I1004 04:24:15.883544   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:15.997406   67541 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:24:16.095391   67541 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:24:16.095508   67541 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:24:16.102427   67541 start.go:563] Will wait 60s for crictl version
	I1004 04:24:16.102510   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:24:16.106958   67541 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:24:16.150721   67541 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:24:16.150824   67541 ssh_runner.go:195] Run: crio --version
	I1004 04:24:16.181714   67541 ssh_runner.go:195] Run: crio --version
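	Note: the two runs above only print version information. A quick manual equivalent (assuming shell access to the guest) that should match the values reported at 04:24:16.150721:
	    sudo crictl version     # RuntimeName: cri-o, RuntimeVersion: 1.29.1
	    crio --version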
	I1004 04:24:16.214202   67541 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:24:16.215583   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:16.218418   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:16.218800   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:16.218831   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:16.219002   67541 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 04:24:16.223382   67541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:16.236443   67541 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:24:16.236565   67541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:24:16.236652   67541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:16.279095   67541 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:24:16.279158   67541 ssh_runner.go:195] Run: which lz4
	I1004 04:24:16.283684   67541 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:24:16.288436   67541 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:24:16.288472   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 04:24:17.853549   67541 crio.go:462] duration metric: took 1.569889689s to copy over tarball
	I1004 04:24:17.853631   67541 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
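	Note: the preload path above is: list the image store with crictl, and if the expected images are missing, copy the cached tarball to /preloaded.tar.lz4 and unpack it under /var. A condensed sketch of that sequence (commands and paths taken from the log; the scp step is elided):
	    sudo crictl images --output json      # decide whether the preload is needed
	    # ...copy preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4...
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm -f /preloaded.tar.lz4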
	I1004 04:24:14.765651   66293 main.go:141] libmachine: (no-preload-658545) Calling .Start
	I1004 04:24:14.765886   66293 main.go:141] libmachine: (no-preload-658545) Ensuring networks are active...
	I1004 04:24:14.766761   66293 main.go:141] libmachine: (no-preload-658545) Ensuring network default is active
	I1004 04:24:14.767179   66293 main.go:141] libmachine: (no-preload-658545) Ensuring network mk-no-preload-658545 is active
	I1004 04:24:14.767706   66293 main.go:141] libmachine: (no-preload-658545) Getting domain xml...
	I1004 04:24:14.768478   66293 main.go:141] libmachine: (no-preload-658545) Creating domain...
	I1004 04:24:16.087556   66293 main.go:141] libmachine: (no-preload-658545) Waiting to get IP...
	I1004 04:24:16.088628   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.089032   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.089093   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.089008   68422 retry.go:31] will retry after 276.442313ms: waiting for machine to come up
	I1004 04:24:16.367448   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.367923   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.367953   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.367894   68422 retry.go:31] will retry after 291.504157ms: waiting for machine to come up
	I1004 04:24:16.661396   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.661958   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.662009   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.661932   68422 retry.go:31] will retry after 378.34293ms: waiting for machine to come up
	I1004 04:24:17.041431   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:17.041942   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:17.041970   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:17.041916   68422 retry.go:31] will retry after 553.613866ms: waiting for machine to come up
	I1004 04:24:17.596745   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:17.597294   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:17.597327   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:17.597259   68422 retry.go:31] will retry after 611.098402ms: waiting for machine to come up
	I1004 04:24:18.210083   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:18.210569   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:18.210592   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:18.210530   68422 retry.go:31] will retry after 691.8822ms: waiting for machine to come up
	I1004 04:24:13.873857   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:14.374241   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:14.873863   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.374063   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.873950   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:16.373819   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:16.874290   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:17.374357   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:17.874163   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:18.374160   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.049926   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:17.051060   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:20.132987   67541 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.279324141s)
	I1004 04:24:20.133023   67541 crio.go:469] duration metric: took 2.279442603s to extract the tarball
	I1004 04:24:20.133033   67541 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:24:20.171805   67541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:20.217431   67541 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:24:20.217458   67541 cache_images.go:84] Images are preloaded, skipping loading
	I1004 04:24:20.217468   67541 kubeadm.go:934] updating node { 192.168.39.201 8444 v1.31.1 crio true true} ...
	I1004 04:24:20.217586   67541 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-281471 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:20.217687   67541 ssh_runner.go:195] Run: crio config
	I1004 04:24:20.269529   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:24:20.269559   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:20.269569   67541 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:20.269604   67541 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.201 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-281471 NodeName:default-k8s-diff-port-281471 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:24:20.269822   67541 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.201
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-281471"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:20.269913   67541 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:24:20.281286   67541 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:20.281368   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:20.292186   67541 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1004 04:24:20.310972   67541 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:20.329420   67541 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
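	Note: the generated config is staged as /var/tmp/minikube/kubeadm.yaml.new before being copied into place. A hedged sketch of sanity-checking it with the bundled kubeadm, assuming this kubeadm build supports the `config validate` subcommand:
	    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new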
	I1004 04:24:20.348358   67541 ssh_runner.go:195] Run: grep 192.168.39.201	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:20.352641   67541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:20.366317   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:20.499648   67541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:20.518930   67541 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471 for IP: 192.168.39.201
	I1004 04:24:20.518954   67541 certs.go:194] generating shared ca certs ...
	I1004 04:24:20.518971   67541 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:20.519121   67541 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:20.519167   67541 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:20.519177   67541 certs.go:256] generating profile certs ...
	I1004 04:24:20.519279   67541 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/client.key
	I1004 04:24:20.519347   67541 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.key.6cd63ef9
	I1004 04:24:20.519381   67541 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.key
	I1004 04:24:20.519492   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:20.519527   67541 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:20.519539   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:20.519570   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:20.519614   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:20.519643   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:20.519710   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:20.520418   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:20.566110   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:20.613646   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:20.648416   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:20.678840   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1004 04:24:20.722021   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 04:24:20.749381   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:20.776777   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 04:24:20.803998   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:20.833182   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:20.859600   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:20.887732   67541 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:20.910566   67541 ssh_runner.go:195] Run: openssl version
	I1004 04:24:20.917151   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:20.930475   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.935819   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.935895   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.942607   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:20.954950   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:20.967348   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.972468   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.972543   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.979061   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:20.992010   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:21.008370   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.015101   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.015161   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.023491   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
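	Note: the symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject hashes of the corresponding certificates, which is how tools look up CAs in /etc/ssl/certs. A minimal sketch (assuming shell access to the guest) showing where such a name comes from:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem)
	    ls -l "/etc/ssl/certs/${h}.0"         # should resolve to the symlink created above (51391683.0)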
	I1004 04:24:21.035766   67541 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:21.041416   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:21.048405   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:21.055468   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:21.062228   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:21.068967   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:21.075984   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
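	Note: each check above uses `openssl x509 -checkend 86400`, which exits 0 only if the certificate is still valid 24 hours from now. A compact sketch of running the same expiry check over several of the control-plane certs (file paths copied from the log):
	    for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	             /var/lib/minikube/certs/etcd/server.crt \
	             /var/lib/minikube/certs/front-proxy-client.crt; do
	      sudo openssl x509 -noout -in "$c" -checkend 86400 && echo "$c: valid for >24h"
	    done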
	I1004 04:24:21.086088   67541 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:21.086196   67541 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:21.086253   67541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:21.131997   67541 cri.go:89] found id: ""
	I1004 04:24:21.132061   67541 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:21.145219   67541 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:21.145237   67541 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:21.145289   67541 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:21.157041   67541 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:21.158724   67541 kubeconfig.go:125] found "default-k8s-diff-port-281471" server: "https://192.168.39.201:8444"
	I1004 04:24:21.162295   67541 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:21.173771   67541 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.201
	I1004 04:24:21.173806   67541 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:21.173820   67541 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:21.173891   67541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:21.215149   67541 cri.go:89] found id: ""
	I1004 04:24:21.215216   67541 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:21.234432   67541 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:21.245688   67541 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:21.245707   67541 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:21.245758   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1004 04:24:21.256101   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:21.256168   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:21.267319   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1004 04:24:21.279995   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:21.280050   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:21.292588   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1004 04:24:21.304478   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:21.304545   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:21.317012   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1004 04:24:21.328769   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:21.328853   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:24:21.341597   67541 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:21.353901   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:21.483705   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.340208   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.582628   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.662202   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.773206   67541 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:22.773327   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.274151   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:18.903981   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:18.904373   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:18.904398   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:18.904331   68422 retry.go:31] will retry after 1.022635653s: waiting for machine to come up
	I1004 04:24:19.929163   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:19.929707   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:19.929749   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:19.929656   68422 retry.go:31] will retry after 939.130061ms: waiting for machine to come up
	I1004 04:24:20.870067   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:20.870578   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:20.870606   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:20.870521   68422 retry.go:31] will retry after 1.673919202s: waiting for machine to come up
	I1004 04:24:22.546229   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:22.546621   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:22.546650   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:22.546569   68422 retry.go:31] will retry after 1.962556159s: waiting for machine to come up
	I1004 04:24:18.874214   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.374670   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.874355   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:20.374335   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:20.874299   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:21.374492   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:21.874293   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:22.373890   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:22.874622   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.374639   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.552128   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:22.050844   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:24.051071   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:23.774477   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.807536   67541 api_server.go:72] duration metric: took 1.034328656s to wait for apiserver process to appear ...
	I1004 04:24:23.807569   67541 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:24:23.807593   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.646266   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:26.646299   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:26.646319   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.696828   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:26.696856   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:26.808107   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.819887   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:26.819947   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:27.308535   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:27.317320   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:27.317372   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:27.807868   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:27.817762   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:27.817805   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:28.307660   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:28.313515   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 200:
	ok
	I1004 04:24:28.320539   67541 api_server.go:141] control plane version: v1.31.1
	I1004 04:24:28.320568   67541 api_server.go:131] duration metric: took 4.512991081s to wait for apiserver health ...
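The healthz sequence above is the usual restart progression: 403 while anonymous access to /healthz is still forbidden, 500 while post-start hooks (RBAC bootstrap roles, priority classes, apiservice registration) are still completing, then 200 with a bare "ok". A hedged one-liner that polls the same endpoint until healthy (sketch using the address from the log; -k skips TLS verification since the probe is anonymous):

    until curl -ks https://192.168.39.201:8444/healthz | grep -qx ok; do
        sleep 0.5
    done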
	I1004 04:24:28.320578   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:24:28.320586   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:28.322138   67541 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:24:24.511356   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:24.511886   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:24.511917   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:24.511843   68422 retry.go:31] will retry after 2.5950382s: waiting for machine to come up
	I1004 04:24:27.109018   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:27.109474   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:27.109503   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:27.109451   68422 retry.go:31] will retry after 2.984182925s: waiting for machine to come up
	I1004 04:24:23.873822   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:24.373911   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:24.874756   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:25.374035   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:25.873874   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.374503   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.874371   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:27.374335   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:27.873941   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:28.373861   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.550974   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:28.552007   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:28.323513   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:24:28.336556   67541 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
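The 496-byte file copied above is the bridge CNI configuration mentioned in the "Configuring bridge CNI" step; its exact contents are not shown in the log. A generic bridge conflist of the same shape (a stand-in for illustration only, not the file minikube actually writes):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF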
	I1004 04:24:28.358371   67541 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:24:28.373163   67541 system_pods.go:59] 8 kube-system pods found
	I1004 04:24:28.373204   67541 system_pods.go:61] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:24:28.373217   67541 system_pods.go:61] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:24:28.373228   67541 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:24:28.373239   67541 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:24:28.373246   67541 system_pods.go:61] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:24:28.373256   67541 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:24:28.373267   67541 system_pods.go:61] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:24:28.373273   67541 system_pods.go:61] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:24:28.373283   67541 system_pods.go:74] duration metric: took 14.891267ms to wait for pod list to return data ...
	I1004 04:24:28.373294   67541 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:24:28.378226   67541 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:24:28.378269   67541 node_conditions.go:123] node cpu capacity is 2
	I1004 04:24:28.378285   67541 node_conditions.go:105] duration metric: took 4.985167ms to run NodePressure ...
	I1004 04:24:28.378309   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:28.649369   67541 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:24:28.654563   67541 kubeadm.go:739] kubelet initialised
	I1004 04:24:28.654584   67541 kubeadm.go:740] duration metric: took 5.188927ms waiting for restarted kubelet to initialise ...
	I1004 04:24:28.654591   67541 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:24:28.662152   67541 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.668248   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.668278   67541 pod_ready.go:82] duration metric: took 6.099746ms for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.668287   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.668294   67541 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.675790   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.675811   67541 pod_ready.go:82] duration metric: took 7.509617ms for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.675823   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.675830   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.683763   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.683811   67541 pod_ready.go:82] duration metric: took 7.972006ms for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.683830   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.683839   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.761974   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.762006   67541 pod_ready.go:82] duration metric: took 78.154275ms for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.762021   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.762030   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.162590   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-proxy-4nnld" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.162623   67541 pod_ready.go:82] duration metric: took 400.583388ms for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.162634   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-proxy-4nnld" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.162643   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.562557   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.562584   67541 pod_ready.go:82] duration metric: took 399.929497ms for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.562595   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.562602   67541 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.963502   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.963528   67541 pod_ready.go:82] duration metric: took 400.919452ms for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.963539   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.963547   67541 pod_ready.go:39] duration metric: took 1.308947485s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
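Each pod_ready wait above checks the pod's Ready condition but is skipped while the node itself still reports Ready=False, which is why every control-plane pod falls through in roughly 1.3s here. Once the node is Ready, the same wait can be expressed directly with kubectl (sketch, reusing the namespace and one of the labels from the log):

    kubectl -n kube-system wait pod \
      -l component=kube-apiserver \
      --for=condition=Ready --timeout=4m0s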
	I1004 04:24:29.963561   67541 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:24:29.976241   67541 ops.go:34] apiserver oom_adj: -16
	I1004 04:24:29.976268   67541 kubeadm.go:597] duration metric: took 8.831025549s to restartPrimaryControlPlane
	I1004 04:24:29.976278   67541 kubeadm.go:394] duration metric: took 8.890203906s to StartCluster
	I1004 04:24:29.976295   67541 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:29.976372   67541 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:24:29.977898   67541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:29.978168   67541 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:24:29.978222   67541 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:24:29.978306   67541 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978330   67541 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-281471"
	W1004 04:24:29.978341   67541 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:24:29.978329   67541 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978353   67541 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978369   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:29.978367   67541 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-281471"
	I1004 04:24:29.978377   67541 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-281471"
	W1004 04:24:29.978387   67541 addons.go:243] addon metrics-server should already be in state true
	I1004 04:24:29.978413   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:29.978464   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:29.978731   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978783   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.978818   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978871   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.978839   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978970   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.979903   67541 out.go:177] * Verifying Kubernetes components...
	I1004 04:24:29.981432   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:29.994332   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42631
	I1004 04:24:29.994917   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:29.995488   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:29.995503   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:29.995865   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:29.996675   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:29.999180   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
	I1004 04:24:29.999220   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I1004 04:24:29.999564   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:29.999651   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.000157   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.000182   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.000262   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.000281   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.000379   67541 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-281471"
	W1004 04:24:30.000398   67541 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:24:30.000429   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:30.000613   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.000646   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.000790   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.000812   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.001163   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.001215   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.001259   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.001307   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.016576   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34675
	I1004 04:24:30.016650   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41997
	I1004 04:24:30.016796   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44507
	I1004 04:24:30.016993   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017079   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017138   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017536   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017557   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017548   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017584   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017537   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017621   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017929   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.017931   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.017970   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.018100   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.018152   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.018559   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.018600   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.020021   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.020637   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.022016   67541 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:30.022018   67541 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:24:30.023395   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:24:30.023417   67541 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:24:30.023444   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.023489   67541 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:24:30.023506   67541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:24:30.023528   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.027678   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028005   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028129   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.028180   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028538   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.028552   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.028560   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028724   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.028750   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.028881   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.028911   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.029013   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.029055   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.029124   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.037309   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46395
	I1004 04:24:30.037846   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.038328   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.038355   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.038683   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.038850   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.040366   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.040572   67541 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:24:30.040586   67541 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:24:30.040602   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.043618   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.044070   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.044092   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.044232   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.044413   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.044541   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.044687   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.194435   67541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:30.223577   67541 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-281471" to be "Ready" ...
	I1004 04:24:30.277458   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:24:30.316201   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:24:30.316227   67541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:24:30.333635   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:24:30.346511   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:24:30.346549   67541 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:24:30.405197   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:24:30.405219   67541 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:24:30.465174   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:24:31.307064   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307137   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307430   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.307442   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.307469   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.307546   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307574   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307691   67541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030198983s)
	I1004 04:24:31.307733   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307747   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307789   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.307811   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.309264   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.309275   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.309281   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.309291   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.309299   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.309538   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.309568   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.309583   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.315635   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.315653   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.315917   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.315933   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.411630   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.411658   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.411934   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.411951   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.411965   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.411983   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.411997   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.412221   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.412261   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.412274   67541 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-281471"
	I1004 04:24:31.412283   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.414267   67541 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1004 04:24:31.415607   67541 addons.go:510] duration metric: took 1.43738386s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
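The three addons were re-applied here by running kubectl against the manifests shipped under /etc/kubernetes/addons, as logged above. The same end state can be reached interactively on this profile with the minikube CLI (equivalent sketch):

    minikube -p default-k8s-diff-port-281471 addons enable storage-provisioner
    minikube -p default-k8s-diff-port-281471 addons enable default-storageclass
    minikube -p default-k8s-diff-port-281471 addons enable metrics-server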
	I1004 04:24:32.227563   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:30.095611   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:30.096032   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:30.096061   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:30.095981   68422 retry.go:31] will retry after 2.833386023s: waiting for machine to come up
	I1004 04:24:32.933027   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.933509   66293 main.go:141] libmachine: (no-preload-658545) Found IP for machine: 192.168.72.54
	I1004 04:24:32.933538   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has current primary IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.933544   66293 main.go:141] libmachine: (no-preload-658545) Reserving static IP address...
	I1004 04:24:32.933950   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "no-preload-658545", mac: "52:54:00:f5:6c:11", ip: "192.168.72.54"} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:32.933970   66293 main.go:141] libmachine: (no-preload-658545) Reserved static IP address: 192.168.72.54
	I1004 04:24:32.933988   66293 main.go:141] libmachine: (no-preload-658545) DBG | skip adding static IP to network mk-no-preload-658545 - found existing host DHCP lease matching {name: "no-preload-658545", mac: "52:54:00:f5:6c:11", ip: "192.168.72.54"}
	I1004 04:24:32.934002   66293 main.go:141] libmachine: (no-preload-658545) DBG | Getting to WaitForSSH function...
	I1004 04:24:32.934016   66293 main.go:141] libmachine: (no-preload-658545) Waiting for SSH to be available...
	I1004 04:24:32.936089   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.936440   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:32.936471   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.936572   66293 main.go:141] libmachine: (no-preload-658545) DBG | Using SSH client type: external
	I1004 04:24:32.936599   66293 main.go:141] libmachine: (no-preload-658545) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa (-rw-------)
	I1004 04:24:32.936637   66293 main.go:141] libmachine: (no-preload-658545) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:24:32.936650   66293 main.go:141] libmachine: (no-preload-658545) DBG | About to run SSH command:
	I1004 04:24:32.936661   66293 main.go:141] libmachine: (no-preload-658545) DBG | exit 0
	I1004 04:24:33.064432   66293 main.go:141] libmachine: (no-preload-658545) DBG | SSH cmd err, output: <nil>: 
	I1004 04:24:33.064791   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetConfigRaw
	I1004 04:24:33.065494   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:33.068038   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.068302   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.068325   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.068580   66293 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/config.json ...
	I1004 04:24:33.068837   66293 machine.go:93] provisionDockerMachine start ...
	I1004 04:24:33.068858   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:33.069072   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.071425   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.071748   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.071819   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.071946   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.072166   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.072332   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.072429   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.072587   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.072799   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.072814   66293 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:24:33.184623   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:24:33.184656   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.184912   66293 buildroot.go:166] provisioning hostname "no-preload-658545"
	I1004 04:24:33.184946   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.185126   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.188804   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.189189   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.189222   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.189419   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.189664   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.189839   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.190002   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.190128   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.190300   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.190313   66293 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-658545 && echo "no-preload-658545" | sudo tee /etc/hostname
	I1004 04:24:33.316349   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-658545
	
	I1004 04:24:33.316381   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.319460   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.319908   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.319945   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.320110   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.320301   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.320475   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.320628   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.320811   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.321031   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.321058   66293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-658545' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-658545/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-658545' | sudo tee -a /etc/hosts; 
				fi
			fi
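The two SSH commands above are the hostname provisioning step: set the live hostname, persist it to /etc/hostname, then patch the 127.0.1.1 entry so the new name keeps resolving locally. A standalone sketch of that same sequence, with the machine name pulled out into a variable (NEW_NAME is just a placeholder; this run uses no-preload-658545):

	#!/usr/bin/env bash
	# Minimal sketch of the hostname provisioning shown in the log above.
	NEW_NAME="no-preload-658545"

	# Set the running hostname and persist it across reboots.
	sudo hostname "$NEW_NAME" && echo "$NEW_NAME" | sudo tee /etc/hostname

	# Keep the new name resolvable locally: rewrite an existing 127.0.1.1 line,
	# or append one if the file has none.
	if ! grep -q "[[:space:]]$NEW_NAME\$" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NEW_NAME/" /etc/hosts
	  else
	    echo "127.0.1.1 $NEW_NAME" | sudo tee -a /etc/hosts
	  fi
	fi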
	I1004 04:24:28.874265   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:29.374364   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:29.874581   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:30.373909   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:30.874089   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.374708   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.874696   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:32.374061   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:32.874233   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:33.374290   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.050105   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:33.549870   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:33.444185   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:24:33.444221   66293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:24:33.444246   66293 buildroot.go:174] setting up certificates
	I1004 04:24:33.444257   66293 provision.go:84] configureAuth start
	I1004 04:24:33.444273   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.444569   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:33.447726   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.448137   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.448168   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.448332   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.450903   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.451311   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.451340   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.451479   66293 provision.go:143] copyHostCerts
	I1004 04:24:33.451559   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:24:33.451571   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:24:33.451638   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:24:33.451748   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:24:33.451763   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:24:33.451818   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:24:33.451897   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:24:33.451906   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:24:33.451931   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:24:33.451992   66293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.no-preload-658545 san=[127.0.0.1 192.168.72.54 localhost minikube no-preload-658545]
	I1004 04:24:33.577106   66293 provision.go:177] copyRemoteCerts
	I1004 04:24:33.577160   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:24:33.577183   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.579990   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.580330   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.580359   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.580496   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.580672   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.580810   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.580937   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:33.671123   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:24:33.697805   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1004 04:24:33.725408   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:24:33.751285   66293 provision.go:87] duration metric: took 307.010531ms to configureAuth
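configureAuth above refreshes the local certs, signs a server certificate whose SANs include 127.0.0.1, 192.168.72.54, localhost, minikube and no-preload-658545, then pushes three files to the guest under /etc/docker. A rough illustration of just the copy step (the certificate generation itself happens inside minikube's Go code; "guest" below is a placeholder for the SSH target 192.168.72.54, and the local file names stand for the .minikube paths shown in the log):

	# Push the CA and server cert/key to the guest's /etc/docker directory.
	ssh guest 'sudo mkdir -p /etc/docker'
	cat ca.pem         | ssh guest 'sudo tee /etc/docker/ca.pem >/dev/null'
	cat server.pem     | ssh guest 'sudo tee /etc/docker/server.pem >/dev/null'
	cat server-key.pem | ssh guest 'sudo tee /etc/docker/server-key.pem >/dev/null'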
	I1004 04:24:33.751315   66293 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:24:33.751553   66293 config.go:182] Loaded profile config "no-preload-658545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:33.751651   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.754476   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.754896   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.754938   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.755087   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.755282   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.755450   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.755592   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.755723   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.755969   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.755987   66293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:24:33.996596   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:24:33.996625   66293 machine.go:96] duration metric: took 927.772762ms to provisionDockerMachine
	I1004 04:24:33.996636   66293 start.go:293] postStartSetup for "no-preload-658545" (driver="kvm2")
	I1004 04:24:33.996645   66293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:24:33.996662   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:33.996958   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:24:33.996981   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.999632   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.000082   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.000111   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.000324   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.000537   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.000733   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.000924   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.089338   66293 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:24:34.094278   66293 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:24:34.094303   66293 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:24:34.094377   66293 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:24:34.094468   66293 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:24:34.094597   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:24:34.105335   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:34.134191   66293 start.go:296] duration metric: took 137.541908ms for postStartSetup
	I1004 04:24:34.134243   66293 fix.go:56] duration metric: took 19.393079344s for fixHost
	I1004 04:24:34.134269   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.137227   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.137599   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.137638   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.137779   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.137978   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.138156   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.138289   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.138459   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:34.138652   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:34.138663   66293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:24:34.250671   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015874.218795126
	
	I1004 04:24:34.250699   66293 fix.go:216] guest clock: 1728015874.218795126
	I1004 04:24:34.250709   66293 fix.go:229] Guest: 2024-10-04 04:24:34.218795126 +0000 UTC Remote: 2024-10-04 04:24:34.134249208 +0000 UTC m=+355.755571497 (delta=84.545918ms)
	I1004 04:24:34.250735   66293 fix.go:200] guest clock delta is within tolerance: 84.545918ms
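The fixHost step ends by reading the guest clock with date +%s.%N and comparing it against the host's wall clock; the 84.5ms delta measured above is inside minikube's tolerance, so no resync is needed. A back-of-the-envelope version of that check in shell (the actual tolerance value is not printed in the log, so any threshold is an assumption; "guest" is again a placeholder for the SSH target):

	# Read the guest's epoch time and compare it to the local clock.
	guest_now=$(ssh guest 'date +%s.%N')   # e.g. 1728015874.218795126 in this run
	host_now=$(date +%s.%N)
	awk -v g="$guest_now" -v h="$host_now" 'BEGIN {
	  d = g - h; if (d < 0) d = -d
	  printf "guest clock delta: %.6fs\n", d   # the run above measured ~0.0845s
	}'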
	I1004 04:24:34.250742   66293 start.go:83] releasing machines lock for "no-preload-658545", held for 19.509615446s
	I1004 04:24:34.250763   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.250965   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:34.254332   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.254720   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.254746   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.254982   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255550   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255745   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255843   66293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:24:34.255907   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.255973   66293 ssh_runner.go:195] Run: cat /version.json
	I1004 04:24:34.255996   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.258802   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259036   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259118   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.259143   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259309   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.259487   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.259538   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.259563   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259633   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.259752   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.259845   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.259891   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.260042   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.260180   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.362345   66293 ssh_runner.go:195] Run: systemctl --version
	I1004 04:24:34.368641   66293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:24:34.527679   66293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:24:34.534212   66293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:24:34.534291   66293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:24:34.553539   66293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:24:34.553570   66293 start.go:495] detecting cgroup driver to use...
	I1004 04:24:34.553638   66293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:24:34.573489   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:24:34.588220   66293 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:24:34.588281   66293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:24:34.606014   66293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:24:34.621246   66293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:24:34.749423   66293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:24:34.915880   66293 docker.go:233] disabling docker service ...
	I1004 04:24:34.915960   66293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:24:34.936625   66293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:24:34.951534   66293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:24:35.089398   66293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:24:35.225269   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:24:35.241006   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:24:35.261586   66293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:24:35.261651   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.273501   66293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:24:35.273571   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.285392   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.296475   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.307774   66293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:24:35.319241   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.330361   66293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.349013   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
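The sed and grep edits starting at 04:24:35.261 rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl. A quick way to spot-check the result on the guest, with the expected values reconstructed from those commands (an inference from the edits, not a capture of the actual file):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, pieced together from the sed edits above:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]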
	I1004 04:24:35.360603   66293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:24:35.371516   66293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:24:35.371581   66293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:24:35.387209   66293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:24:35.398144   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:35.528196   66293 ssh_runner.go:195] Run: sudo systemctl restart crio
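Before restarting CRI-O, the run checks bridge netfilter: the sysctl probe fails because br_netfilter is not loaded yet, so it falls back to modprobe, then turns on IPv4 forwarding and restarts the runtime. The same sequence as one idempotent snippet (a sketch; the log drives each command separately over SSH):

	# Make sure bridged traffic hits iptables and IPv4 forwarding is on, then restart CRI-O.
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	  sudo modprobe br_netfilter        # loading the module creates the sysctl key
	fi
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload
	sudo systemctl restart crio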
	I1004 04:24:35.629120   66293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:24:35.629198   66293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:24:35.634243   66293 start.go:563] Will wait 60s for crictl version
	I1004 04:24:35.634307   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:35.638372   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:24:35.678659   66293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:24:35.678763   66293 ssh_runner.go:195] Run: crio --version
	I1004 04:24:35.715285   66293 ssh_runner.go:195] Run: crio --version
	I1004 04:24:35.751571   66293 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:24:34.228500   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:36.727080   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:37.228706   67541 node_ready.go:49] node "default-k8s-diff-port-281471" has status "Ready":"True"
	I1004 04:24:37.228745   67541 node_ready.go:38] duration metric: took 7.005123712s for node "default-k8s-diff-port-281471" to be "Ready" ...
	I1004 04:24:37.228760   67541 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:24:37.235256   67541 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:35.752737   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:35.755375   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:35.755763   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:35.755818   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:35.756063   66293 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1004 04:24:35.760601   66293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
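The one-liner above is a simple pattern for replacing a single /etc/hosts entry without editing the file while reading it: filter out any old host.minikube.internal line, append the fresh one to a temp file, then copy the temp file back over /etc/hosts with sudo. Unrolled for readability (same commands as the log, just split across lines):

	# Unrolled version of the host.minikube.internal update run above.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo $'192.168.72.1\thost.minikube.internal'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts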
	I1004 04:24:35.773870   66293 kubeadm.go:883] updating cluster {Name:no-preload-658545 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:24:35.773970   66293 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:24:35.774001   66293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:35.813619   66293 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:24:35.813650   66293 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 04:24:35.813736   66293 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:35.813756   66293 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.813785   66293 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1004 04:24:35.813796   66293 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.813877   66293 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:35.813740   66293 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.813758   66293 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.813771   66293 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.815271   66293 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:35.815277   66293 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1004 04:24:35.815292   66293 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.815271   66293 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.815276   66293 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:35.815353   66293 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.815358   66293 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.815402   66293 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.956470   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.963066   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.965110   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.970080   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.972477   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.988253   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.013802   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1004 04:24:36.063322   66293 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1004 04:24:36.063364   66293 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.063405   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214786   66293 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1004 04:24:36.214827   66293 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.214867   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214928   66293 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1004 04:24:36.214961   66293 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1004 04:24:36.214995   66293 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.215023   66293 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1004 04:24:36.215043   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214965   66293 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.215081   66293 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1004 04:24:36.215047   66293 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.215100   66293 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.215110   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.215139   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.215147   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.274103   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.274103   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.274185   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.274292   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.274329   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.274343   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.392523   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.405236   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.405257   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.408799   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.408857   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.408860   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.511001   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.568598   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.568658   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.568720   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.568929   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.569021   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.599594   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1004 04:24:36.599733   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.696242   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1004 04:24:36.696294   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1004 04:24:36.696336   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1004 04:24:36.696363   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:36.696390   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:36.696399   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:36.696401   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1004 04:24:36.696449   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1004 04:24:36.696507   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:36.696521   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:36.696508   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1004 04:24:36.696563   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.696613   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.701522   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1004 04:24:37.132809   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
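Since the earlier image check reported that nothing is preloaded, minikube falls back to its per-image cache: inspect each required image in the runtime, drop any stale reference with crictl, and load the cached tarball with podman load, which is the pattern the interleaved lines above and below show. For a single image it boils down to roughly this (image name and tarball path taken from the log; the real orchestration runs in parallel inside minikube's Go code):

	IMG="registry.k8s.io/kube-controller-manager:v1.31.1"
	TARBALL="/var/lib/minikube/images/kube-controller-manager_v1.31.1"

	# Only load from cache when the image is missing from the CRI-O image store.
	if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
	  sudo /usr/bin/crictl rmi "$IMG" 2>/dev/null || true   # clear any stale/partial reference
	  sudo podman load -i "$TARBALL"                        # load the cached image tarball
	fi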
	I1004 04:24:33.874344   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:34.374158   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:34.873848   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:35.373944   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:35.874697   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.373831   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.874231   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:37.374723   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:37.873861   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:38.374206   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.050420   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:38.051653   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:39.242026   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:41.244977   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:39.289977   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.593422519s)
	I1004 04:24:39.290020   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1004 04:24:39.290087   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.593446646s)
	I1004 04:24:39.290114   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1004 04:24:39.290136   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:39.290158   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.593739386s)
	I1004 04:24:39.290175   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1004 04:24:39.290097   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.593563637s)
	I1004 04:24:39.290203   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.593795645s)
	I1004 04:24:39.290208   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:39.290213   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1004 04:24:39.290213   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1004 04:24:39.290265   66293 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.157417466s)
	I1004 04:24:39.290314   66293 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1004 04:24:39.290348   66293 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:39.290392   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:40.750955   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.460708297s)
	I1004 04:24:40.751065   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1004 04:24:40.751102   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:40.750969   66293 ssh_runner.go:235] Completed: which crictl: (1.460561899s)
	I1004 04:24:40.751159   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:40.751190   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:43.031349   66293 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.280136047s)
	I1004 04:24:43.031395   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.280209115s)
	I1004 04:24:43.031566   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1004 04:24:43.031493   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:43.031600   66293 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:43.031641   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:43.084191   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:38.873705   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:39.374361   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:39.874144   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.373793   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.873796   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:41.374744   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:41.874442   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:42.374561   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:42.874638   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:43.374677   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.548818   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:42.550744   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:43.742554   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:44.244427   67541 pod_ready.go:93] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.244453   67541 pod_ready.go:82] duration metric: took 7.009169057s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.244463   67541 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.250595   67541 pod_ready.go:93] pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.250617   67541 pod_ready.go:82] duration metric: took 6.147481ms for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.250625   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.256537   67541 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.256570   67541 pod_ready.go:82] duration metric: took 5.936641ms for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.256583   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.262681   67541 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.262707   67541 pod_ready.go:82] duration metric: took 6.115804ms for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.262721   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.271089   67541 pod_ready.go:93] pod "kube-proxy-4nnld" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.271124   67541 pod_ready.go:82] duration metric: took 8.394207ms for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.271138   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.640124   67541 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.640158   67541 pod_ready.go:82] duration metric: took 369.009816ms for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.640172   67541 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:46.647420   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:45.132971   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.101305613s)
	I1004 04:24:45.133043   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1004 04:24:45.133071   66293 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.048844025s)
	I1004 04:24:45.133079   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:45.133110   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1004 04:24:45.133135   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:45.133179   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:47.228047   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.094844592s)
	I1004 04:24:47.228087   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1004 04:24:47.228089   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.0949275s)
	I1004 04:24:47.228119   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1004 04:24:47.228154   66293 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:47.228214   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:43.874583   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:44.374117   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:44.874398   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.374755   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.874039   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:46.374598   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:46.874446   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:47.374384   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:47.874596   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:48.374021   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.049760   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:47.551861   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:48.647700   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:50.648288   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:52.649288   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:50.627043   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.398805191s)
	I1004 04:24:50.627085   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1004 04:24:50.627122   66293 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:50.627191   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:51.282056   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1004 04:24:51.282099   66293 cache_images.go:123] Successfully loaded all cached images
	I1004 04:24:51.282104   66293 cache_images.go:92] duration metric: took 15.468441268s to LoadCachedImages
	I1004 04:24:51.282116   66293 kubeadm.go:934] updating node { 192.168.72.54 8443 v1.31.1 crio true true} ...
	I1004 04:24:51.282243   66293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-658545 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:51.282321   66293 ssh_runner.go:195] Run: crio config
	I1004 04:24:51.333133   66293 cni.go:84] Creating CNI manager for ""
	I1004 04:24:51.333162   66293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:51.333173   66293 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:51.333201   66293 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-658545 NodeName:no-preload-658545 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:24:51.333361   66293 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-658545"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:51.333419   66293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:24:51.344694   66293 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:51.344757   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:51.354990   66293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1004 04:24:51.372572   66293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:51.394129   66293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
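
The kubeadm.yaml.new copied above is a multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---", exactly as rendered in the log. A minimal Go sketch (not part of minikube; assumes the gopkg.in/yaml.v3 module is available, and the path is the one from the log) showing how such a file could be split and the kubernetesVersion read back:

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3" // assumed dependency for this illustration
)

func main() {
	// Read the rendered multi-document config written by the test run above.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	// Split on YAML document separators and decode each document generically.
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			continue
		}
		// ClusterConfiguration is the document that carries kubernetesVersion.
		if m["kind"] == "ClusterConfiguration" {
			fmt.Println("kubernetesVersion:", m["kubernetesVersion"])
		}
	}
}
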
	I1004 04:24:51.412865   66293 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:51.416985   66293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:51.430835   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:51.559349   66293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:51.579093   66293 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545 for IP: 192.168.72.54
	I1004 04:24:51.579120   66293 certs.go:194] generating shared ca certs ...
	I1004 04:24:51.579140   66293 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:51.579318   66293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:51.579378   66293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:51.579391   66293 certs.go:256] generating profile certs ...
	I1004 04:24:51.579494   66293 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/client.key
	I1004 04:24:51.579588   66293 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.key.10ceac04
	I1004 04:24:51.579648   66293 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.key
	I1004 04:24:51.579808   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:51.579849   66293 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:51.579861   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:51.579891   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:51.579926   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:51.579961   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:51.580018   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:51.580871   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:51.630190   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:51.667887   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:51.715372   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:51.750063   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1004 04:24:51.776606   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 04:24:51.808943   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:51.839165   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:24:51.867862   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:51.898026   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:51.926810   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:51.955416   66293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:51.977621   66293 ssh_runner.go:195] Run: openssl version
	I1004 04:24:51.984023   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:51.997672   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.002969   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.003039   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.009473   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:52.021001   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:52.032834   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.037679   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.037742   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.044012   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:52.055377   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:52.066222   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.070747   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.070794   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.076922   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:52.087952   66293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:52.093052   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:52.099710   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:52.105841   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:52.112092   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:52.118428   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:52.125380   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
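
Each `openssl x509 -checkend 86400` invocation above exits non-zero when the certificate would expire within the next 86400 seconds (24 hours); that is how this restart path decides whether existing control-plane certificates can be reused. A minimal standard-library Go equivalent of that check (the file path is taken from the log; the helper name expiresWithin is illustrative, not minikube's code, which shells out to openssl):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// will expire within duration d of the current time.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
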
	I1004 04:24:52.132085   66293 kubeadm.go:392] StartCluster: {Name:no-preload-658545 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:52.132193   66293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:52.132254   66293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:52.171814   66293 cri.go:89] found id: ""
	I1004 04:24:52.171882   66293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:52.182484   66293 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:52.182508   66293 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:52.182559   66293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:52.193069   66293 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:52.194108   66293 kubeconfig.go:125] found "no-preload-658545" server: "https://192.168.72.54:8443"
	I1004 04:24:52.196237   66293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:52.206551   66293 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I1004 04:24:52.206584   66293 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:52.206598   66293 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:52.206657   66293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:52.249698   66293 cri.go:89] found id: ""
	I1004 04:24:52.249762   66293 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:52.266001   66293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:52.276056   66293 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:52.276081   66293 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:52.276128   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:24:52.285610   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:52.285677   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:52.295177   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:24:52.304309   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:52.304362   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:52.314126   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:24:52.323562   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:52.323618   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:52.332906   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:24:52.342199   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:52.342252   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:24:52.351661   66293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:52.361071   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:52.493171   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:48.874471   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:49.374480   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:49.874689   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.373726   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.874543   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:51.373743   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:51.874513   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:52.374719   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:52.874305   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:53.374419   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.049668   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:52.050522   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:55.147282   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:57.648169   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:53.586422   66293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.093219868s)
	I1004 04:24:53.586448   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:53.794085   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:53.872327   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:54.004418   66293 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:54.004510   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.505463   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.004602   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.036834   66293 api_server.go:72] duration metric: took 1.032414365s to wait for apiserver process to appear ...
	I1004 04:24:55.036858   66293 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:24:55.036877   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:55.037325   66293 api_server.go:269] stopped: https://192.168.72.54:8443/healthz: Get "https://192.168.72.54:8443/healthz": dial tcp 192.168.72.54:8443: connect: connection refused
	I1004 04:24:55.537513   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:57.951637   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:57.951663   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:57.951676   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.010162   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:58.010188   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:58.037484   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.060069   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:58.060161   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:53.874725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.373903   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.874127   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.374051   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.874019   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:56.373828   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:56.874027   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:57.373914   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:57.874598   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:58.374106   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.550080   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:56.550541   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:59.051837   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:58.536932   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.541611   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:58.541634   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:59.037723   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:59.057378   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:59.057411   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:59.536994   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:59.545827   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 200:
	ok
	I1004 04:24:59.554199   66293 api_server.go:141] control plane version: v1.31.1
	I1004 04:24:59.554238   66293 api_server.go:131] duration metric: took 4.517373336s to wait for apiserver health ...
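
The healthz polling above treats 403 (the anonymous probe before RBAC bootstrap roles exist) and 500 (post-start hooks still failing) as "not ready yet" and stops once https://192.168.72.54:8443/healthz returns 200. A minimal sketch of that poll loop in Go, standard library only (the address, timeout values and skip-verify TLS setting mirror this log; this is not minikube's actual api_server.go implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver presents a cluster-local certificate here, so this
	// readiness probe skips verification, as an anonymous client would.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.54:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403 and 500 mean the apiserver is up but not fully initialised;
			// keep polling until the deadline.
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver /healthz")
}
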
	I1004 04:24:59.554247   66293 cni.go:84] Creating CNI manager for ""
	I1004 04:24:59.554253   66293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:59.555912   66293 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:24:59.557009   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:24:59.590146   66293 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:24:59.610903   66293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:24:59.634067   66293 system_pods.go:59] 8 kube-system pods found
	I1004 04:24:59.634109   66293 system_pods.go:61] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:24:59.634121   66293 system_pods.go:61] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:24:59.634131   66293 system_pods.go:61] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:24:59.634143   66293 system_pods.go:61] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:24:59.634151   66293 system_pods.go:61] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 04:24:59.634160   66293 system_pods.go:61] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:24:59.634168   66293 system_pods.go:61] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:24:59.634181   66293 system_pods.go:61] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 04:24:59.634189   66293 system_pods.go:74] duration metric: took 23.257716ms to wait for pod list to return data ...
	I1004 04:24:59.634198   66293 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:24:59.638128   66293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:24:59.638160   66293 node_conditions.go:123] node cpu capacity is 2
	I1004 04:24:59.638173   66293 node_conditions.go:105] duration metric: took 3.969841ms to run NodePressure ...
	I1004 04:24:59.638191   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:59.968829   66293 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:24:59.975495   66293 kubeadm.go:739] kubelet initialised
	I1004 04:24:59.975516   66293 kubeadm.go:740] duration metric: took 6.660196ms waiting for restarted kubelet to initialise ...
	I1004 04:24:59.975522   66293 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:25:00.084084   66293 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.113474   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.113498   66293 pod_ready.go:82] duration metric: took 29.379607ms for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.113507   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.113513   66293 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.128436   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "etcd-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.128463   66293 pod_ready.go:82] duration metric: took 14.94278ms for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.128475   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "etcd-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.128485   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.140033   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-apiserver-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.140059   66293 pod_ready.go:82] duration metric: took 11.56545ms for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.140068   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-apiserver-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.140077   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.157254   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.157286   66293 pod_ready.go:82] duration metric: took 17.197805ms for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.157298   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.157306   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.415110   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-proxy-dvr6b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.415141   66293 pod_ready.go:82] duration metric: took 257.824162ms for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.415151   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-proxy-dvr6b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.415157   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.815201   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-scheduler-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.815226   66293 pod_ready.go:82] duration metric: took 400.063468ms for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.815235   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-scheduler-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.815241   66293 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:01.214416   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:01.214448   66293 pod_ready.go:82] duration metric: took 399.197779ms for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:01.214461   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:01.214468   66293 pod_ready.go:39] duration metric: took 1.238937842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
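
The pod_ready.go waits above poll the PodReady condition of each system-critical pod, and skip ahead when the hosting node itself is not Ready (the "skipping!" lines). A rough equivalent of that single-pod check using client-go (kubeconfig path is a placeholder; the pod name is the coredns pod from this log; assumes the k8s.io/client-go module):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; the test run uses its own profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-ppggj", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		// PodReady is the condition the test waits on for up to 4m0s per pod.
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}
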
	I1004 04:25:01.214484   66293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:25:01.227389   66293 ops.go:34] apiserver oom_adj: -16
	I1004 04:25:01.227414   66293 kubeadm.go:597] duration metric: took 9.044898439s to restartPrimaryControlPlane
	I1004 04:25:01.227424   66293 kubeadm.go:394] duration metric: took 9.095346513s to StartCluster
	I1004 04:25:01.227441   66293 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:25:01.227520   66293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:25:01.229057   66293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:25:01.229318   66293 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:25:01.229389   66293 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:25:01.229496   66293 addons.go:69] Setting storage-provisioner=true in profile "no-preload-658545"
	I1004 04:25:01.229505   66293 addons.go:69] Setting default-storageclass=true in profile "no-preload-658545"
	I1004 04:25:01.229512   66293 addons.go:234] Setting addon storage-provisioner=true in "no-preload-658545"
	W1004 04:25:01.229520   66293 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:25:01.229524   66293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-658545"
	I1004 04:25:01.229558   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.229562   66293 config.go:182] Loaded profile config "no-preload-658545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:25:01.229557   66293 addons.go:69] Setting metrics-server=true in profile "no-preload-658545"
	I1004 04:25:01.229607   66293 addons.go:234] Setting addon metrics-server=true in "no-preload-658545"
	W1004 04:25:01.229621   66293 addons.go:243] addon metrics-server should already be in state true
	I1004 04:25:01.229655   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.229968   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.229987   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.229971   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.230013   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.230030   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.230133   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.231051   66293 out.go:177] * Verifying Kubernetes components...
	I1004 04:25:01.232578   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:25:01.256283   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38313
	I1004 04:25:01.256939   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.257689   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.257720   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.258124   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.258358   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.262593   66293 addons.go:234] Setting addon default-storageclass=true in "no-preload-658545"
	W1004 04:25:01.262620   66293 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:25:01.262652   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.263036   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.263117   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.274653   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I1004 04:25:01.275130   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.275655   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.275685   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.276062   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.276652   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.276697   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.277272   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I1004 04:25:01.277756   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.278175   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.278191   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.278548   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.279116   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.279163   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.283719   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1004 04:25:01.284316   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.284814   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.284836   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.285180   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.285751   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.285801   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.297682   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I1004 04:25:01.297859   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I1004 04:25:01.298298   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.298418   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.298975   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.298995   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.299058   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.299077   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.299407   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.299470   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.299618   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.299660   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.301552   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.302048   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.303197   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I1004 04:25:01.303600   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.304053   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.304068   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.304124   66293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:25:01.304234   66293 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:25:01.304403   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.304571   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.305715   66293 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:25:01.305735   66293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:25:01.305850   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:25:01.305861   66293 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:25:01.305876   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.305752   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.306101   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.306321   66293 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:25:01.306334   66293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:25:01.306349   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.310374   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.310752   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.310776   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.310888   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.311057   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.311192   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.311272   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.311338   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.311603   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312049   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.312072   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312175   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.312201   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312302   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.312468   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.312497   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.312586   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.312658   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.312681   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.312811   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.312948   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.478533   66293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:25:01.511716   66293 node_ready.go:35] waiting up to 6m0s for node "no-preload-658545" to be "Ready" ...
	I1004 04:25:01.557879   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:25:01.574381   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:25:01.601090   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:25:01.601112   66293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:25:01.630465   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:25:01.630495   66293 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:25:01.681089   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:25:01.681118   66293 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:25:01.703024   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:25:02.053562   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.053585   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.053855   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.053871   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.053882   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.053891   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.054118   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.054139   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.054128   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.061624   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.061646   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.061949   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.061967   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.061985   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.580950   66293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.00653263s)
	I1004 04:25:02.581002   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.581014   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.581350   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.581368   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.581376   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.581382   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.581459   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.581594   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.581606   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.702713   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.702739   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.703015   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.703028   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.703090   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.703106   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.703117   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.703347   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.703363   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.703380   66293 addons.go:475] Verifying addon metrics-server=true in "no-preload-658545"
	I1004 04:25:02.705335   66293 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1004 04:24:59.648241   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:01.649424   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:02.706605   66293 addons.go:510] duration metric: took 1.477226s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1004 04:24:58.874143   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:59.373810   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:59.874682   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:00.374672   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:00.873725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.374175   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.874724   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:02.374725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:02.874746   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:03.373689   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.548783   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:03.549515   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:04.146633   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:06.147540   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.147626   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:03.516566   66293 node_ready.go:53] node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:06.022815   66293 node_ready.go:53] node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:03.874594   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:04.374498   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:04.874377   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:05.374050   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:05.374139   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:05.412153   67282 cri.go:89] found id: ""
	I1004 04:25:05.412185   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.412195   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:05.412202   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:05.412264   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:05.446725   67282 cri.go:89] found id: ""
	I1004 04:25:05.446750   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.446758   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:05.446763   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:05.446816   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:05.487652   67282 cri.go:89] found id: ""
	I1004 04:25:05.487678   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.487686   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:05.487691   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:05.487752   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:05.526275   67282 cri.go:89] found id: ""
	I1004 04:25:05.526302   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.526310   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:05.526319   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:05.526375   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:05.565004   67282 cri.go:89] found id: ""
	I1004 04:25:05.565034   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.565045   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:05.565052   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:05.565101   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:05.601963   67282 cri.go:89] found id: ""
	I1004 04:25:05.601990   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.601998   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:05.602003   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:05.602051   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:05.638621   67282 cri.go:89] found id: ""
	I1004 04:25:05.638651   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.638660   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:05.638666   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:05.638720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:05.678042   67282 cri.go:89] found id: ""
	I1004 04:25:05.678071   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.678082   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:05.678093   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:05.678107   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:05.720677   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:05.720707   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:05.775219   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:05.775252   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:05.789748   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:05.789774   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:05.918752   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:05.918783   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:05.918798   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:08.493206   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:05.551035   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.048870   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:10.148154   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.645708   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.516666   66293 node_ready.go:49] node "no-preload-658545" has status "Ready":"True"
	I1004 04:25:08.516690   66293 node_ready.go:38] duration metric: took 7.004939371s for node "no-preload-658545" to be "Ready" ...
	I1004 04:25:08.516699   66293 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:25:08.522101   66293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.527132   66293 pod_ready.go:93] pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:08.527153   66293 pod_ready.go:82] duration metric: took 5.024648ms for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.527162   66293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.534172   66293 pod_ready.go:93] pod "etcd-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:08.534195   66293 pod_ready.go:82] duration metric: took 7.027189ms for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.534204   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:10.541186   66293 pod_ready.go:103] pod "kube-apiserver-no-preload-658545" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.040607   66293 pod_ready.go:93] pod "kube-apiserver-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.040640   66293 pod_ready.go:82] duration metric: took 3.506428875s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.040654   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.045845   66293 pod_ready.go:93] pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.045870   66293 pod_ready.go:82] duration metric: took 5.207108ms for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.045883   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.051587   66293 pod_ready.go:93] pod "kube-proxy-dvr6b" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.051604   66293 pod_ready.go:82] duration metric: took 5.715328ms for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.051613   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.116361   66293 pod_ready.go:93] pod "kube-scheduler-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.116401   66293 pod_ready.go:82] duration metric: took 64.774234ms for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.116411   66293 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.506490   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:08.506549   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:08.545875   67282 cri.go:89] found id: ""
	I1004 04:25:08.545909   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.545920   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:08.545933   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:08.545997   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:08.582348   67282 cri.go:89] found id: ""
	I1004 04:25:08.582375   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.582383   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:08.582389   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:08.582438   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:08.637763   67282 cri.go:89] found id: ""
	I1004 04:25:08.637797   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.637809   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:08.637816   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:08.637890   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:08.681171   67282 cri.go:89] found id: ""
	I1004 04:25:08.681205   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.681216   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:08.681224   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:08.681289   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:08.719513   67282 cri.go:89] found id: ""
	I1004 04:25:08.719542   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.719549   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:08.719555   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:08.719607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:08.762152   67282 cri.go:89] found id: ""
	I1004 04:25:08.762175   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.762183   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:08.762188   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:08.762251   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:08.799857   67282 cri.go:89] found id: ""
	I1004 04:25:08.799881   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.799892   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:08.799903   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:08.799954   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:08.835264   67282 cri.go:89] found id: ""
	I1004 04:25:08.835296   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.835308   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:08.835318   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:08.835330   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:08.875501   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:08.875532   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:08.929145   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:08.929178   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:08.942769   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:08.942808   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:09.025372   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:09.025401   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:09.025416   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:11.611179   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:11.625118   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:11.625253   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:11.661512   67282 cri.go:89] found id: ""
	I1004 04:25:11.661540   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.661547   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:11.661553   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:11.661607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:11.704902   67282 cri.go:89] found id: ""
	I1004 04:25:11.704931   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.704941   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:11.704948   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:11.705007   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:11.741747   67282 cri.go:89] found id: ""
	I1004 04:25:11.741770   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.741780   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:11.741787   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:11.741841   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:11.776838   67282 cri.go:89] found id: ""
	I1004 04:25:11.776863   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.776871   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:11.776876   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:11.776927   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:11.812996   67282 cri.go:89] found id: ""
	I1004 04:25:11.813024   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.813033   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:11.813038   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:11.813097   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:11.853718   67282 cri.go:89] found id: ""
	I1004 04:25:11.853744   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.853752   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:11.853758   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:11.853813   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:11.896840   67282 cri.go:89] found id: ""
	I1004 04:25:11.896867   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.896879   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:11.896885   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:11.896943   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:11.932529   67282 cri.go:89] found id: ""
	I1004 04:25:11.932552   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.932561   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:11.932569   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:11.932580   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:11.946504   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:11.946538   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:12.024692   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:12.024713   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:12.024724   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:12.111942   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:12.111976   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:12.156483   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:12.156522   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:10.049912   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.051024   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.646058   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:16.647214   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.123343   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:16.622947   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.708243   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:14.722943   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:14.723007   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:14.758502   67282 cri.go:89] found id: ""
	I1004 04:25:14.758555   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.758567   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:14.758575   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:14.758633   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:14.796496   67282 cri.go:89] found id: ""
	I1004 04:25:14.796525   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.796532   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:14.796538   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:14.796595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:14.832216   67282 cri.go:89] found id: ""
	I1004 04:25:14.832247   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.832259   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:14.832266   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:14.832330   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:14.868461   67282 cri.go:89] found id: ""
	I1004 04:25:14.868491   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.868501   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:14.868509   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:14.868568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:14.909827   67282 cri.go:89] found id: ""
	I1004 04:25:14.909857   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.909867   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:14.909875   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:14.909949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:14.947809   67282 cri.go:89] found id: ""
	I1004 04:25:14.947839   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.947850   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:14.947857   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:14.947904   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:14.984073   67282 cri.go:89] found id: ""
	I1004 04:25:14.984101   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.984110   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:14.984115   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:14.984170   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:15.021145   67282 cri.go:89] found id: ""
	I1004 04:25:15.021179   67282 logs.go:282] 0 containers: []
	W1004 04:25:15.021191   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:15.021204   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:15.021217   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:15.075295   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:15.075328   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:15.088953   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:15.088980   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:15.175103   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:15.175128   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:15.175143   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:15.259004   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:15.259044   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:17.825029   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:17.839496   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:17.839574   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:17.877643   67282 cri.go:89] found id: ""
	I1004 04:25:17.877673   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.877684   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:17.877692   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:17.877751   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:17.921534   67282 cri.go:89] found id: ""
	I1004 04:25:17.921563   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.921574   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:17.921581   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:17.921634   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:17.961281   67282 cri.go:89] found id: ""
	I1004 04:25:17.961307   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.961315   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:17.961320   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:17.961386   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:18.001036   67282 cri.go:89] found id: ""
	I1004 04:25:18.001066   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.001078   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:18.001085   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:18.001156   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:18.043212   67282 cri.go:89] found id: ""
	I1004 04:25:18.043241   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.043252   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:18.043259   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:18.043319   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:18.082399   67282 cri.go:89] found id: ""
	I1004 04:25:18.082423   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.082430   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:18.082435   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:18.082493   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:18.120507   67282 cri.go:89] found id: ""
	I1004 04:25:18.120534   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.120544   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:18.120550   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:18.120605   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:18.156601   67282 cri.go:89] found id: ""
	I1004 04:25:18.156629   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.156640   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:18.156650   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:18.156663   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:18.198393   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:18.198424   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:18.250992   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:18.251032   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:18.267984   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:18.268015   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:18.343283   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:18.343303   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:18.343314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:14.549511   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:17.048940   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:19.051125   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:18.648462   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:21.146813   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.147244   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:18.624165   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:20.627159   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.123629   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:20.922578   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:20.938037   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:20.938122   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:20.978389   67282 cri.go:89] found id: ""
	I1004 04:25:20.978417   67282 logs.go:282] 0 containers: []
	W1004 04:25:20.978426   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:20.978431   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:20.978478   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:21.033490   67282 cri.go:89] found id: ""
	I1004 04:25:21.033520   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.033528   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:21.033533   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:21.033589   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:21.087168   67282 cri.go:89] found id: ""
	I1004 04:25:21.087198   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.087209   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:21.087216   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:21.087299   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:21.144327   67282 cri.go:89] found id: ""
	I1004 04:25:21.144356   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.144366   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:21.144373   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:21.144431   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:21.183336   67282 cri.go:89] found id: ""
	I1004 04:25:21.183378   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.183390   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:21.183397   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:21.183459   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:21.221847   67282 cri.go:89] found id: ""
	I1004 04:25:21.221878   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.221892   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:21.221901   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:21.221961   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:21.258542   67282 cri.go:89] found id: ""
	I1004 04:25:21.258573   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.258584   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:21.258590   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:21.258652   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:21.303173   67282 cri.go:89] found id: ""
	I1004 04:25:21.303202   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.303211   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:21.303218   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:21.303243   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:21.358109   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:21.358146   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:21.373958   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:21.373987   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:21.450956   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:21.450980   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:21.451006   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:21.534763   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:21.534807   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:21.550109   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.550304   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:25.148868   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:27.647698   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:25.622123   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:27.624777   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:24.082856   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:24.098263   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:24.098336   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:24.144969   67282 cri.go:89] found id: ""
	I1004 04:25:24.144999   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.145009   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:24.145015   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:24.145072   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:24.185670   67282 cri.go:89] found id: ""
	I1004 04:25:24.185693   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.185702   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:24.185708   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:24.185769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:24.223657   67282 cri.go:89] found id: ""
	I1004 04:25:24.223691   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.223703   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:24.223710   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:24.223769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:24.261841   67282 cri.go:89] found id: ""
	I1004 04:25:24.261864   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.261872   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:24.261878   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:24.261938   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:24.299734   67282 cri.go:89] found id: ""
	I1004 04:25:24.299758   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.299769   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:24.299775   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:24.299867   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:24.337413   67282 cri.go:89] found id: ""
	I1004 04:25:24.337440   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.337450   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:24.337457   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:24.337523   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:24.375963   67282 cri.go:89] found id: ""
	I1004 04:25:24.375995   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.376007   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:24.376014   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:24.376073   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:24.415978   67282 cri.go:89] found id: ""
	I1004 04:25:24.416010   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.416021   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:24.416030   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:24.416045   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:24.458703   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:24.458738   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:24.510669   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:24.510704   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:24.525646   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:24.525687   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:24.603280   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:24.603310   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:24.603324   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:27.184935   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:27.200241   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:27.200321   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:27.237546   67282 cri.go:89] found id: ""
	I1004 04:25:27.237576   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.237588   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:27.237596   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:27.237653   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:27.272598   67282 cri.go:89] found id: ""
	I1004 04:25:27.272625   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.272634   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:27.272642   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:27.272700   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:27.306659   67282 cri.go:89] found id: ""
	I1004 04:25:27.306693   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.306706   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:27.306715   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:27.306779   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:27.344315   67282 cri.go:89] found id: ""
	I1004 04:25:27.344349   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.344363   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:27.344370   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:27.344428   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:27.380231   67282 cri.go:89] found id: ""
	I1004 04:25:27.380267   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.380278   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:27.380286   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:27.380346   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:27.418137   67282 cri.go:89] found id: ""
	I1004 04:25:27.418161   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.418169   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:27.418174   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:27.418225   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:27.458235   67282 cri.go:89] found id: ""
	I1004 04:25:27.458262   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.458283   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:27.458289   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:27.458342   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:27.495161   67282 cri.go:89] found id: ""
	I1004 04:25:27.495189   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.495198   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:27.495206   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:27.495217   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:27.547749   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:27.547795   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:27.563322   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:27.563355   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:27.636682   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:27.636710   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:27.636725   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:27.711316   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:27.711354   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:26.050001   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:28.548322   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.147210   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:32.647027   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.122267   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:32.122501   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.250361   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:30.265789   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:30.265866   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:30.305127   67282 cri.go:89] found id: ""
	I1004 04:25:30.305166   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.305183   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:30.305190   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:30.305258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:30.346529   67282 cri.go:89] found id: ""
	I1004 04:25:30.346560   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.346570   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:30.346577   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:30.346641   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:30.387368   67282 cri.go:89] found id: ""
	I1004 04:25:30.387407   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.387418   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:30.387425   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:30.387489   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:30.428193   67282 cri.go:89] found id: ""
	I1004 04:25:30.428230   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.428242   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:30.428248   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:30.428308   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:30.465484   67282 cri.go:89] found id: ""
	I1004 04:25:30.465509   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.465518   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:30.465523   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:30.465573   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:30.501133   67282 cri.go:89] found id: ""
	I1004 04:25:30.501163   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.501174   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:30.501181   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:30.501248   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:30.536492   67282 cri.go:89] found id: ""
	I1004 04:25:30.536519   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.536530   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:30.536536   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:30.536587   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:30.571721   67282 cri.go:89] found id: ""
	I1004 04:25:30.571745   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.571753   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:30.571761   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:30.571771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:30.626922   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:30.626958   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:30.641817   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:30.641852   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:30.725604   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:30.725633   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:30.725647   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:30.800359   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:30.800393   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:33.340747   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:33.355862   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:33.355936   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:33.397628   67282 cri.go:89] found id: ""
	I1004 04:25:33.397655   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.397662   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:33.397668   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:33.397718   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:33.442100   67282 cri.go:89] found id: ""
	I1004 04:25:33.442128   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.442137   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:33.442142   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:33.442187   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:33.481035   67282 cri.go:89] found id: ""
	I1004 04:25:33.481063   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.481076   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:33.481083   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:33.481149   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:30.549816   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:33.048791   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:35.147125   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:37.647224   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:34.122573   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:36.622639   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:33.516633   67282 cri.go:89] found id: ""
	I1004 04:25:33.516661   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.516669   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:33.516677   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:33.516727   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:33.556569   67282 cri.go:89] found id: ""
	I1004 04:25:33.556600   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.556610   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:33.556617   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:33.556679   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:33.591678   67282 cri.go:89] found id: ""
	I1004 04:25:33.591715   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.591724   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:33.591731   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:33.591786   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:33.626571   67282 cri.go:89] found id: ""
	I1004 04:25:33.626594   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.626602   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:33.626607   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:33.626650   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:33.664336   67282 cri.go:89] found id: ""
	I1004 04:25:33.664359   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.664367   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:33.664375   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:33.664386   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:33.748013   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:33.748047   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:33.786730   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:33.786767   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:33.839355   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:33.839392   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:33.853807   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:33.853835   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:33.920183   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:36.420485   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:36.435150   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:36.435221   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:36.471818   67282 cri.go:89] found id: ""
	I1004 04:25:36.471842   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.471850   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:36.471855   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:36.471908   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:36.511469   67282 cri.go:89] found id: ""
	I1004 04:25:36.511496   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.511504   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:36.511509   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:36.511557   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:36.552607   67282 cri.go:89] found id: ""
	I1004 04:25:36.552633   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.552641   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:36.552646   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:36.552702   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:36.596260   67282 cri.go:89] found id: ""
	I1004 04:25:36.596282   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.596290   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:36.596295   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:36.596340   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:36.636674   67282 cri.go:89] found id: ""
	I1004 04:25:36.636700   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.636708   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:36.636713   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:36.636764   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:36.675155   67282 cri.go:89] found id: ""
	I1004 04:25:36.675194   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.675206   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:36.675214   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:36.675279   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:36.713458   67282 cri.go:89] found id: ""
	I1004 04:25:36.713485   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.713493   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:36.713498   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:36.713552   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:36.754567   67282 cri.go:89] found id: ""
	I1004 04:25:36.754596   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.754607   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:36.754618   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:36.754631   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:36.824413   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:36.824439   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:36.824453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:36.900438   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:36.900471   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:36.942238   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:36.942264   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:36.992527   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:36.992556   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:35.050546   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:37.548965   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:39.647505   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:42.146720   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:38.623559   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:41.121785   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:43.122437   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:39.506599   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:39.520782   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:39.520854   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:39.561853   67282 cri.go:89] found id: ""
	I1004 04:25:39.561880   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.561891   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:39.561898   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:39.561955   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:39.597548   67282 cri.go:89] found id: ""
	I1004 04:25:39.597581   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.597591   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:39.597598   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:39.597659   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:39.634481   67282 cri.go:89] found id: ""
	I1004 04:25:39.634517   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.634525   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:39.634530   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:39.634575   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:39.677077   67282 cri.go:89] found id: ""
	I1004 04:25:39.677107   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.677117   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:39.677124   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:39.677185   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:39.716334   67282 cri.go:89] found id: ""
	I1004 04:25:39.716356   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.716364   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:39.716369   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:39.716416   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:39.754765   67282 cri.go:89] found id: ""
	I1004 04:25:39.754792   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.754803   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:39.754810   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:39.754863   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:39.788782   67282 cri.go:89] found id: ""
	I1004 04:25:39.788811   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.788824   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:39.788832   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:39.788890   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:39.821946   67282 cri.go:89] found id: ""
	I1004 04:25:39.821970   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.821979   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:39.821988   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:39.822001   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:39.892629   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:39.892657   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:39.892674   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:39.973480   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:39.973515   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:40.018175   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:40.018203   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:40.068585   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:40.068620   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:42.583639   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:42.597249   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:42.597333   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:42.631993   67282 cri.go:89] found id: ""
	I1004 04:25:42.632020   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.632030   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:42.632037   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:42.632091   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:42.669708   67282 cri.go:89] found id: ""
	I1004 04:25:42.669739   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.669749   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:42.669762   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:42.669836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:42.705995   67282 cri.go:89] found id: ""
	I1004 04:25:42.706019   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.706030   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:42.706037   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:42.706094   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:42.740436   67282 cri.go:89] found id: ""
	I1004 04:25:42.740458   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.740466   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:42.740472   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:42.740524   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:42.774516   67282 cri.go:89] found id: ""
	I1004 04:25:42.774546   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.774557   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:42.774564   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:42.774614   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:42.807471   67282 cri.go:89] found id: ""
	I1004 04:25:42.807502   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.807510   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:42.807516   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:42.807561   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:42.851943   67282 cri.go:89] found id: ""
	I1004 04:25:42.851968   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.851977   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:42.851983   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:42.852040   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:42.887762   67282 cri.go:89] found id: ""
	I1004 04:25:42.887801   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.887812   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:42.887822   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:42.887834   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:42.960398   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:42.960423   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:42.960440   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:43.040078   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:43.040117   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:43.081614   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:43.081638   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:43.132744   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:43.132781   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:39.551722   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:42.049418   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:44.049835   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:44.646919   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:47.146884   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:45.622878   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.122299   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:45.647332   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:45.660765   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:45.660834   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:45.696351   67282 cri.go:89] found id: ""
	I1004 04:25:45.696379   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.696390   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:45.696397   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:45.696449   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:45.738529   67282 cri.go:89] found id: ""
	I1004 04:25:45.738553   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.738561   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:45.738566   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:45.738621   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:45.773071   67282 cri.go:89] found id: ""
	I1004 04:25:45.773094   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.773103   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:45.773110   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:45.773165   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:45.810813   67282 cri.go:89] found id: ""
	I1004 04:25:45.810840   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.810852   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:45.810859   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:45.810913   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:45.848916   67282 cri.go:89] found id: ""
	I1004 04:25:45.848942   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.848951   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:45.848956   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:45.849014   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:45.886737   67282 cri.go:89] found id: ""
	I1004 04:25:45.886763   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.886772   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:45.886778   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:45.886825   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:45.922263   67282 cri.go:89] found id: ""
	I1004 04:25:45.922291   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.922301   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:45.922307   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:45.922364   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:45.956688   67282 cri.go:89] found id: ""
	I1004 04:25:45.956710   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.956718   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:45.956725   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:45.956737   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:46.007334   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:46.007365   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:46.020892   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:46.020916   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:46.089786   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:46.089809   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:46.089822   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:46.175987   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:46.176017   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:46.549153   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.549893   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:49.147322   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:51.647365   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:50.622540   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:52.623714   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.718354   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:48.733291   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:48.733347   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:48.769149   67282 cri.go:89] found id: ""
	I1004 04:25:48.769175   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.769185   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:48.769193   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:48.769249   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:48.804386   67282 cri.go:89] found id: ""
	I1004 04:25:48.804410   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.804418   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:48.804423   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:48.804467   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:48.841747   67282 cri.go:89] found id: ""
	I1004 04:25:48.841774   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.841782   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:48.841788   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:48.841836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:48.880025   67282 cri.go:89] found id: ""
	I1004 04:25:48.880048   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.880058   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:48.880064   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:48.880121   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:48.916506   67282 cri.go:89] found id: ""
	I1004 04:25:48.916530   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.916540   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:48.916547   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:48.916607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:48.952082   67282 cri.go:89] found id: ""
	I1004 04:25:48.952105   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.952116   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:48.952122   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:48.952177   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:48.986097   67282 cri.go:89] found id: ""
	I1004 04:25:48.986124   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.986135   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:48.986143   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:48.986210   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:49.020400   67282 cri.go:89] found id: ""
	I1004 04:25:49.020428   67282 logs.go:282] 0 containers: []
	W1004 04:25:49.020436   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:49.020445   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:49.020462   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:49.074724   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:49.074754   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:49.088504   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:49.088529   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:49.165940   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:49.165961   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:49.165972   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:49.244482   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:49.244519   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:51.786086   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:51.800644   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:51.800720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:51.839951   67282 cri.go:89] found id: ""
	I1004 04:25:51.839980   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.839990   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:51.839997   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:51.840055   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:51.878660   67282 cri.go:89] found id: ""
	I1004 04:25:51.878684   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.878695   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:51.878701   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:51.878762   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:51.916640   67282 cri.go:89] found id: ""
	I1004 04:25:51.916665   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.916672   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:51.916678   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:51.916725   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:51.953800   67282 cri.go:89] found id: ""
	I1004 04:25:51.953827   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.953835   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:51.953840   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:51.953897   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:51.993107   67282 cri.go:89] found id: ""
	I1004 04:25:51.993139   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.993150   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:51.993157   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:51.993214   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:52.027426   67282 cri.go:89] found id: ""
	I1004 04:25:52.027454   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.027464   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:52.027470   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:52.027521   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:52.063608   67282 cri.go:89] found id: ""
	I1004 04:25:52.063638   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.063650   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:52.063657   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:52.063717   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:52.100052   67282 cri.go:89] found id: ""
	I1004 04:25:52.100083   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.100094   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:52.100106   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:52.100125   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:52.113801   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:52.113827   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:52.201284   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:52.201311   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:52.201322   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:52.280014   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:52.280047   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:52.318120   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:52.318145   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:51.048719   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:53.050304   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:54.146697   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:56.147015   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:58.148736   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:55.122546   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:57.123051   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:54.872245   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:54.886914   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:54.886990   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:54.927117   67282 cri.go:89] found id: ""
	I1004 04:25:54.927144   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.927152   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:54.927157   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:54.927205   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:54.962510   67282 cri.go:89] found id: ""
	I1004 04:25:54.962540   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.962552   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:54.962559   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:54.962619   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:54.996812   67282 cri.go:89] found id: ""
	I1004 04:25:54.996839   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.996848   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:54.996854   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:54.996905   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:55.034557   67282 cri.go:89] found id: ""
	I1004 04:25:55.034587   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.034597   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:55.034605   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:55.034667   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:55.072383   67282 cri.go:89] found id: ""
	I1004 04:25:55.072416   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.072427   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:55.072434   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:55.072494   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:55.121561   67282 cri.go:89] found id: ""
	I1004 04:25:55.121588   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.121598   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:55.121604   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:55.121775   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:55.165525   67282 cri.go:89] found id: ""
	I1004 04:25:55.165553   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.165564   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:55.165570   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:55.165627   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:55.201808   67282 cri.go:89] found id: ""
	I1004 04:25:55.201836   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.201846   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:55.201857   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:55.201870   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:55.280889   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:55.280917   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:55.280932   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:55.354979   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:55.355012   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:55.397144   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:55.397174   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:55.448710   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:55.448746   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:57.963840   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:57.977027   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:57.977085   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:58.019244   67282 cri.go:89] found id: ""
	I1004 04:25:58.019273   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.019285   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:58.019293   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:58.019351   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:58.057979   67282 cri.go:89] found id: ""
	I1004 04:25:58.058008   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.058018   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:58.058027   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:58.058084   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:58.094607   67282 cri.go:89] found id: ""
	I1004 04:25:58.094639   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.094652   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:58.094658   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:58.094726   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:58.130150   67282 cri.go:89] found id: ""
	I1004 04:25:58.130177   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.130188   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:58.130196   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:58.130259   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:58.167662   67282 cri.go:89] found id: ""
	I1004 04:25:58.167691   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.167701   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:58.167709   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:58.167769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:58.203480   67282 cri.go:89] found id: ""
	I1004 04:25:58.203568   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.203585   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:58.203594   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:58.203662   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:58.239516   67282 cri.go:89] found id: ""
	I1004 04:25:58.239537   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.239545   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:58.239551   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:58.239595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:58.275525   67282 cri.go:89] found id: ""
	I1004 04:25:58.275553   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.275564   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:58.275574   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:58.275587   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:58.331191   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:58.331224   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:58.345629   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:58.345659   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:58.416297   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:58.416315   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:58.416326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:58.490659   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:58.490694   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:55.548913   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:57.549457   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:00.647858   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.146570   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:59.623396   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.624074   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.030058   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:01.044568   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:01.044659   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:01.082652   67282 cri.go:89] found id: ""
	I1004 04:26:01.082679   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.082688   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:01.082694   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:01.082750   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:01.120781   67282 cri.go:89] found id: ""
	I1004 04:26:01.120805   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.120814   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:01.120821   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:01.120878   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:01.159494   67282 cri.go:89] found id: ""
	I1004 04:26:01.159523   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.159531   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:01.159537   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:01.159584   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:01.195482   67282 cri.go:89] found id: ""
	I1004 04:26:01.195512   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.195521   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:01.195529   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:01.195589   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:01.233971   67282 cri.go:89] found id: ""
	I1004 04:26:01.233996   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.234006   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:01.234014   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:01.234076   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:01.275935   67282 cri.go:89] found id: ""
	I1004 04:26:01.275958   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.275966   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:01.275971   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:01.276018   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:01.315512   67282 cri.go:89] found id: ""
	I1004 04:26:01.315535   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.315543   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:01.315548   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:01.315603   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:01.356465   67282 cri.go:89] found id: ""
	I1004 04:26:01.356491   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.356505   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:01.356513   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:01.356523   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:01.409237   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:01.409280   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:01.423426   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:01.423453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:01.501372   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:01.501397   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:01.501413   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:01.591087   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:01.591131   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:59.549485   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.550138   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.550258   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:05.646818   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:07.647322   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.634636   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:06.122840   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:04.152506   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:04.166847   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:04.166911   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:04.203138   67282 cri.go:89] found id: ""
	I1004 04:26:04.203167   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.203177   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:04.203184   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:04.203243   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:04.237427   67282 cri.go:89] found id: ""
	I1004 04:26:04.237453   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.237464   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:04.237471   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:04.237525   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:04.272468   67282 cri.go:89] found id: ""
	I1004 04:26:04.272499   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.272511   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:04.272518   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:04.272584   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:04.307347   67282 cri.go:89] found id: ""
	I1004 04:26:04.307373   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.307384   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:04.307390   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:04.307448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:04.342450   67282 cri.go:89] found id: ""
	I1004 04:26:04.342487   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.342498   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:04.342506   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:04.342568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:04.382846   67282 cri.go:89] found id: ""
	I1004 04:26:04.382874   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.382885   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:04.382893   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:04.382945   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:04.418234   67282 cri.go:89] found id: ""
	I1004 04:26:04.418260   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.418268   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:04.418273   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:04.418328   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:04.453433   67282 cri.go:89] found id: ""
	I1004 04:26:04.453456   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.453464   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:04.453473   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:04.453487   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:04.502093   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:04.502123   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:04.515865   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:04.515897   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:04.595672   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:04.595698   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:04.595713   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:04.675273   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:04.675304   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:07.214965   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:07.229495   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:07.229568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:07.268541   67282 cri.go:89] found id: ""
	I1004 04:26:07.268580   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.268591   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:07.268599   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:07.268662   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:07.321382   67282 cri.go:89] found id: ""
	I1004 04:26:07.321414   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.321424   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:07.321431   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:07.321490   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:07.379840   67282 cri.go:89] found id: ""
	I1004 04:26:07.379869   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.379878   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:07.379884   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:07.379928   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:07.431304   67282 cri.go:89] found id: ""
	I1004 04:26:07.431333   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.431343   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:07.431349   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:07.431407   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:07.466853   67282 cri.go:89] found id: ""
	I1004 04:26:07.466880   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.466888   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:07.466893   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:07.466951   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:07.501587   67282 cri.go:89] found id: ""
	I1004 04:26:07.501613   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.501624   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:07.501630   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:07.501685   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:07.536326   67282 cri.go:89] found id: ""
	I1004 04:26:07.536354   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.536364   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:07.536371   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:07.536426   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:07.575257   67282 cri.go:89] found id: ""
	I1004 04:26:07.575283   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.575292   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:07.575299   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:07.575310   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:07.629477   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:07.629515   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:07.643294   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:07.643326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:07.720324   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:07.720350   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:07.720365   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:07.797641   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:07.797678   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:06.049580   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:08.548786   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.146544   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:12.146842   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:08.622497   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.622759   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:12.624285   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.339392   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:10.353341   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:10.353397   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:10.391023   67282 cri.go:89] found id: ""
	I1004 04:26:10.391049   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.391059   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:10.391066   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:10.391129   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:10.424345   67282 cri.go:89] found id: ""
	I1004 04:26:10.424376   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.424388   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:10.424396   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:10.424466   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:10.459344   67282 cri.go:89] found id: ""
	I1004 04:26:10.459374   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.459387   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:10.459394   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:10.459451   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:10.494898   67282 cri.go:89] found id: ""
	I1004 04:26:10.494921   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.494929   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:10.494935   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:10.494982   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:10.531084   67282 cri.go:89] found id: ""
	I1004 04:26:10.531111   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.531122   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:10.531129   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:10.531185   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:10.566918   67282 cri.go:89] found id: ""
	I1004 04:26:10.566949   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.566960   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:10.566967   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:10.567024   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:10.604888   67282 cri.go:89] found id: ""
	I1004 04:26:10.604923   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.604935   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:10.604942   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:10.605013   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:10.641578   67282 cri.go:89] found id: ""
	I1004 04:26:10.641606   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.641620   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:10.641631   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:10.641648   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:10.696848   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:10.696882   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:10.710393   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:10.710417   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:10.780854   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:10.780881   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:10.780895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:10.861732   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:10.861771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:13.403231   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:13.417246   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:13.417319   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:13.451581   67282 cri.go:89] found id: ""
	I1004 04:26:13.451607   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.451616   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:13.451621   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:13.451681   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:13.488362   67282 cri.go:89] found id: ""
	I1004 04:26:13.488388   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.488396   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:13.488401   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:13.488449   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:10.549905   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:13.048997   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:14.646627   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:16.647879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:15.123067   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:17.622729   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:13.522697   67282 cri.go:89] found id: ""
	I1004 04:26:13.522729   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.522740   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:13.522751   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:13.522803   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:13.564926   67282 cri.go:89] found id: ""
	I1004 04:26:13.564959   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.564972   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:13.564981   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:13.565058   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:13.600582   67282 cri.go:89] found id: ""
	I1004 04:26:13.600612   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.600622   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:13.600630   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:13.600688   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:13.634550   67282 cri.go:89] found id: ""
	I1004 04:26:13.634575   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.634584   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:13.634591   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:13.634646   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:13.669281   67282 cri.go:89] found id: ""
	I1004 04:26:13.669311   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.669320   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:13.669326   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:13.669388   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:13.707664   67282 cri.go:89] found id: ""
	I1004 04:26:13.707693   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.707703   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:13.707713   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:13.707727   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:13.721127   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:13.721168   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:13.788026   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:13.788051   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:13.788067   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:13.864505   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:13.864542   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:13.902896   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:13.902921   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:16.456813   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:16.470071   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:16.470138   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:16.506085   67282 cri.go:89] found id: ""
	I1004 04:26:16.506114   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.506125   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:16.506133   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:16.506189   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:16.540016   67282 cri.go:89] found id: ""
	I1004 04:26:16.540044   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.540052   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:16.540056   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:16.540100   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:16.579247   67282 cri.go:89] found id: ""
	I1004 04:26:16.579272   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.579280   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:16.579285   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:16.579332   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:16.615552   67282 cri.go:89] found id: ""
	I1004 04:26:16.615579   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.615601   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:16.615621   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:16.615675   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:16.652639   67282 cri.go:89] found id: ""
	I1004 04:26:16.652660   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.652671   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:16.652678   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:16.652732   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:16.689607   67282 cri.go:89] found id: ""
	I1004 04:26:16.689631   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.689643   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:16.689650   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:16.689720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:16.724430   67282 cri.go:89] found id: ""
	I1004 04:26:16.724458   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.724469   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:16.724475   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:16.724534   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:16.758378   67282 cri.go:89] found id: ""
	I1004 04:26:16.758412   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.758423   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:16.758434   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:16.758454   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:16.826234   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:16.826259   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:16.826273   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:16.906908   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:16.906945   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:16.950295   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:16.950321   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:17.002216   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:17.002253   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:15.549441   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:17.549816   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.147105   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:21.147403   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.622982   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:21.624073   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.516253   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:19.529664   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:19.529726   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:19.566669   67282 cri.go:89] found id: ""
	I1004 04:26:19.566700   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.566711   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:19.566718   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:19.566772   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:19.605923   67282 cri.go:89] found id: ""
	I1004 04:26:19.605951   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.605961   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:19.605968   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:19.606025   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:19.645132   67282 cri.go:89] found id: ""
	I1004 04:26:19.645158   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.645168   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:19.645175   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:19.645235   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:19.687135   67282 cri.go:89] found id: ""
	I1004 04:26:19.687160   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.687171   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:19.687178   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:19.687256   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:19.724180   67282 cri.go:89] found id: ""
	I1004 04:26:19.724213   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.724224   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:19.724230   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:19.724295   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:19.761608   67282 cri.go:89] found id: ""
	I1004 04:26:19.761638   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.761649   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:19.761656   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:19.761714   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:19.795060   67282 cri.go:89] found id: ""
	I1004 04:26:19.795089   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.795099   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:19.795106   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:19.795164   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:19.835678   67282 cri.go:89] found id: ""
	I1004 04:26:19.835703   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.835712   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:19.835722   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:19.835736   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:19.889508   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:19.889543   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:19.903206   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:19.903233   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:19.973445   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:19.973471   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:19.973485   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:20.053996   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:20.054034   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:22.594171   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:22.609084   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:22.609145   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:22.650423   67282 cri.go:89] found id: ""
	I1004 04:26:22.650449   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.650459   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:22.650466   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:22.650525   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:22.686420   67282 cri.go:89] found id: ""
	I1004 04:26:22.686450   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.686461   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:22.686469   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:22.686535   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:22.721385   67282 cri.go:89] found id: ""
	I1004 04:26:22.721408   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.721416   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:22.721421   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:22.721484   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:22.765461   67282 cri.go:89] found id: ""
	I1004 04:26:22.765492   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.765504   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:22.765511   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:22.765569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:22.798192   67282 cri.go:89] found id: ""
	I1004 04:26:22.798220   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.798230   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:22.798235   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:22.798293   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:22.833110   67282 cri.go:89] found id: ""
	I1004 04:26:22.833138   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.833147   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:22.833153   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:22.833212   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:22.875653   67282 cri.go:89] found id: ""
	I1004 04:26:22.875684   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.875696   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:22.875704   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:22.875766   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:22.913906   67282 cri.go:89] found id: ""
	I1004 04:26:22.913931   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.913938   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:22.913946   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:22.913957   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:22.969480   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:22.969511   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:22.983475   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:22.983500   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:23.059953   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:23.059982   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:23.059996   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:23.139106   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:23.139134   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:19.550307   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:22.048618   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:23.647507   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:26.147135   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:24.122370   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:26.122976   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:25.678489   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:25.692648   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:25.692705   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:25.728232   67282 cri.go:89] found id: ""
	I1004 04:26:25.728261   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.728269   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:25.728276   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:25.728335   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:25.763956   67282 cri.go:89] found id: ""
	I1004 04:26:25.763982   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.763991   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:25.763998   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:25.764057   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:25.799715   67282 cri.go:89] found id: ""
	I1004 04:26:25.799743   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.799753   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:25.799761   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:25.799840   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:25.834823   67282 cri.go:89] found id: ""
	I1004 04:26:25.834855   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.834866   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:25.834873   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:25.834933   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:25.869194   67282 cri.go:89] found id: ""
	I1004 04:26:25.869224   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.869235   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:25.869242   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:25.869303   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:25.903514   67282 cri.go:89] found id: ""
	I1004 04:26:25.903543   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.903553   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:25.903558   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:25.903606   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:25.939887   67282 cri.go:89] found id: ""
	I1004 04:26:25.939919   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.939930   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:25.939938   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:25.939996   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:25.981922   67282 cri.go:89] found id: ""
	I1004 04:26:25.981944   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.981952   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:25.981960   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:25.981971   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:26.064860   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:26.064891   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:26.105272   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:26.105296   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:26.162602   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:26.162640   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:26.176408   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:26.176439   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:26.242264   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:24.549364   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:27.049470   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.646788   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.146205   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:33.146879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.622691   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.122181   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:33.123226   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.742417   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:28.755655   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:28.755723   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:28.789338   67282 cri.go:89] found id: ""
	I1004 04:26:28.789361   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.789369   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:28.789374   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:28.789420   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:28.823513   67282 cri.go:89] found id: ""
	I1004 04:26:28.823544   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.823555   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:28.823562   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:28.823619   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:28.858826   67282 cri.go:89] found id: ""
	I1004 04:26:28.858854   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.858866   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:28.858873   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:28.858927   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:28.892552   67282 cri.go:89] found id: ""
	I1004 04:26:28.892579   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.892587   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:28.892593   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:28.892639   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:28.929250   67282 cri.go:89] found id: ""
	I1004 04:26:28.929277   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.929284   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:28.929289   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:28.929335   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:28.966554   67282 cri.go:89] found id: ""
	I1004 04:26:28.966581   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.966589   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:28.966594   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:28.966642   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:28.999930   67282 cri.go:89] found id: ""
	I1004 04:26:28.999954   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.999964   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:28.999970   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:29.000025   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:29.033687   67282 cri.go:89] found id: ""
	I1004 04:26:29.033717   67282 logs.go:282] 0 containers: []
	W1004 04:26:29.033727   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:29.033737   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:29.033752   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:29.109486   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:29.109523   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:29.149125   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:29.149152   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:29.197830   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:29.197861   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:29.211182   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:29.211204   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:29.276808   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:31.777659   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:31.791374   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:31.791425   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:31.825453   67282 cri.go:89] found id: ""
	I1004 04:26:31.825480   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.825489   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:31.825495   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:31.825553   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:31.857845   67282 cri.go:89] found id: ""
	I1004 04:26:31.857875   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.857884   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:31.857893   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:31.857949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:31.892282   67282 cri.go:89] found id: ""
	I1004 04:26:31.892309   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.892317   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:31.892322   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:31.892366   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:31.926016   67282 cri.go:89] found id: ""
	I1004 04:26:31.926037   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.926045   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:31.926051   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:31.926094   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:31.961382   67282 cri.go:89] found id: ""
	I1004 04:26:31.961415   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.961425   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:31.961433   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:31.961492   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:31.994570   67282 cri.go:89] found id: ""
	I1004 04:26:31.994602   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.994613   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:31.994620   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:31.994675   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:32.027359   67282 cri.go:89] found id: ""
	I1004 04:26:32.027383   67282 logs.go:282] 0 containers: []
	W1004 04:26:32.027391   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:32.027397   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:32.027448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:32.063518   67282 cri.go:89] found id: ""
	I1004 04:26:32.063545   67282 logs.go:282] 0 containers: []
	W1004 04:26:32.063555   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:32.063565   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:32.063577   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:32.151555   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:32.151582   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:32.190678   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:32.190700   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:32.243567   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:32.243596   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:32.256293   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:32.256320   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:32.329513   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:29.548687   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.550184   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:34.050659   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:35.147870   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:37.646571   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:35.623302   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:38.122555   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:34.830126   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:34.844760   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:34.844833   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:34.878409   67282 cri.go:89] found id: ""
	I1004 04:26:34.878433   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.878440   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:34.878445   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:34.878500   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:34.916493   67282 cri.go:89] found id: ""
	I1004 04:26:34.916516   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.916524   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:34.916532   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:34.916577   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:34.954532   67282 cri.go:89] found id: ""
	I1004 04:26:34.954556   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.954565   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:34.954570   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:34.954616   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:34.987163   67282 cri.go:89] found id: ""
	I1004 04:26:34.987190   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.987198   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:34.987205   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:34.987261   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:35.021351   67282 cri.go:89] found id: ""
	I1004 04:26:35.021379   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.021388   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:35.021394   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:35.021452   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:35.056350   67282 cri.go:89] found id: ""
	I1004 04:26:35.056376   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.056384   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:35.056390   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:35.056448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:35.093375   67282 cri.go:89] found id: ""
	I1004 04:26:35.093402   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.093412   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:35.093420   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:35.093486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:35.130509   67282 cri.go:89] found id: ""
	I1004 04:26:35.130532   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.130541   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:35.130549   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:35.130562   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:35.188138   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:35.188174   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:35.202226   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:35.202267   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:35.276652   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:35.276675   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:35.276688   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:35.357339   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:35.357373   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:37.898166   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:37.911319   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:37.911387   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:37.944551   67282 cri.go:89] found id: ""
	I1004 04:26:37.944578   67282 logs.go:282] 0 containers: []
	W1004 04:26:37.944590   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:37.944597   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:37.944652   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:37.978066   67282 cri.go:89] found id: ""
	I1004 04:26:37.978093   67282 logs.go:282] 0 containers: []
	W1004 04:26:37.978101   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:37.978107   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:37.978163   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:38.011065   67282 cri.go:89] found id: ""
	I1004 04:26:38.011095   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.011104   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:38.011109   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:38.011156   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:38.050323   67282 cri.go:89] found id: ""
	I1004 04:26:38.050349   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.050359   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:38.050366   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:38.050425   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:38.089141   67282 cri.go:89] found id: ""
	I1004 04:26:38.089169   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.089177   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:38.089182   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:38.089258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:38.122625   67282 cri.go:89] found id: ""
	I1004 04:26:38.122653   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.122663   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:38.122671   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:38.122719   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:38.159957   67282 cri.go:89] found id: ""
	I1004 04:26:38.159982   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.159990   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:38.159996   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:38.160085   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:38.194592   67282 cri.go:89] found id: ""
	I1004 04:26:38.194618   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.194626   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:38.194646   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:38.194657   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:38.263914   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:38.263945   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:38.263958   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:38.339864   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:38.339895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:38.375477   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:38.375505   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:38.428292   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:38.428320   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:36.050815   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:38.548602   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:39.646794   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.146914   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:40.123280   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.623659   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:40.941910   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:40.955041   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:40.955117   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:40.991278   67282 cri.go:89] found id: ""
	I1004 04:26:40.991307   67282 logs.go:282] 0 containers: []
	W1004 04:26:40.991317   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:40.991325   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:40.991389   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:41.025347   67282 cri.go:89] found id: ""
	I1004 04:26:41.025373   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.025385   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:41.025392   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:41.025450   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:41.060974   67282 cri.go:89] found id: ""
	I1004 04:26:41.061001   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.061019   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:41.061026   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:41.061087   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:41.097557   67282 cri.go:89] found id: ""
	I1004 04:26:41.097587   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.097598   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:41.097605   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:41.097665   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:41.136371   67282 cri.go:89] found id: ""
	I1004 04:26:41.136396   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.136405   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:41.136412   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:41.136472   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:41.172590   67282 cri.go:89] found id: ""
	I1004 04:26:41.172617   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.172627   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:41.172634   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:41.172687   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:41.209124   67282 cri.go:89] found id: ""
	I1004 04:26:41.209146   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.209154   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:41.209159   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:41.209214   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:41.250654   67282 cri.go:89] found id: ""
	I1004 04:26:41.250687   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.250699   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:41.250709   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:41.250723   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:41.305814   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:41.305864   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:41.322961   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:41.322989   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:41.427611   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:41.427632   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:41.427648   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:41.505830   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:41.505877   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:40.549691   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.549838   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:44.647149   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.146894   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:45.122344   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.122706   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:44.050902   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:44.065277   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:44.065343   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:44.101089   67282 cri.go:89] found id: ""
	I1004 04:26:44.101110   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.101117   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:44.101123   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:44.101174   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:44.138570   67282 cri.go:89] found id: ""
	I1004 04:26:44.138593   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.138601   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:44.138606   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:44.138650   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:44.178423   67282 cri.go:89] found id: ""
	I1004 04:26:44.178456   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.178478   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:44.178486   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:44.178556   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:44.213301   67282 cri.go:89] found id: ""
	I1004 04:26:44.213330   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.213338   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:44.213344   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:44.213401   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:44.247653   67282 cri.go:89] found id: ""
	I1004 04:26:44.247681   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.247688   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:44.247694   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:44.247756   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:44.281667   67282 cri.go:89] found id: ""
	I1004 04:26:44.281693   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.281704   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:44.281711   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:44.281767   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:44.314637   67282 cri.go:89] found id: ""
	I1004 04:26:44.314667   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.314677   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:44.314684   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:44.314760   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:44.349432   67282 cri.go:89] found id: ""
	I1004 04:26:44.349459   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.349469   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:44.349479   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:44.349492   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:44.397134   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:44.397168   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:44.410708   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:44.410738   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:44.482025   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:44.482049   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:44.482065   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:44.562652   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:44.562699   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:47.101459   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:47.116923   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:47.117020   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:47.153495   67282 cri.go:89] found id: ""
	I1004 04:26:47.153524   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.153534   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:47.153541   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:47.153601   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:47.189976   67282 cri.go:89] found id: ""
	I1004 04:26:47.190004   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.190014   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:47.190023   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:47.190084   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:47.225712   67282 cri.go:89] found id: ""
	I1004 04:26:47.225740   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.225748   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:47.225754   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:47.225800   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:47.261565   67282 cri.go:89] found id: ""
	I1004 04:26:47.261593   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.261603   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:47.261608   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:47.261665   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:47.298152   67282 cri.go:89] found id: ""
	I1004 04:26:47.298204   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.298214   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:47.298223   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:47.298279   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:47.338226   67282 cri.go:89] found id: ""
	I1004 04:26:47.338253   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.338261   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:47.338267   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:47.338320   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:47.378859   67282 cri.go:89] found id: ""
	I1004 04:26:47.378892   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.378902   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:47.378909   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:47.378964   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:47.418161   67282 cri.go:89] found id: ""
	I1004 04:26:47.418186   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.418194   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:47.418203   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:47.418213   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:47.470271   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:47.470311   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:47.484416   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:47.484453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:47.556744   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:47.556767   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:47.556778   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:47.634266   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:47.634299   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:45.050501   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.550072   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:49.147562   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:51.648504   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:49.623375   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:52.122346   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:50.175746   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:50.191850   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:50.191945   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:50.229542   67282 cri.go:89] found id: ""
	I1004 04:26:50.229574   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.229584   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:50.229593   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:50.229655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:50.268401   67282 cri.go:89] found id: ""
	I1004 04:26:50.268432   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.268441   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:50.268449   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:50.268522   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:50.302927   67282 cri.go:89] found id: ""
	I1004 04:26:50.302954   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.302964   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:50.302969   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:50.303029   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:50.336617   67282 cri.go:89] found id: ""
	I1004 04:26:50.336646   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.336656   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:50.336663   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:50.336724   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:50.372871   67282 cri.go:89] found id: ""
	I1004 04:26:50.372901   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.372911   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:50.372918   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:50.372977   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:50.409601   67282 cri.go:89] found id: ""
	I1004 04:26:50.409629   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.409640   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:50.409648   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:50.409723   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:50.451899   67282 cri.go:89] found id: ""
	I1004 04:26:50.451927   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.451935   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:50.451940   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:50.451991   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:50.487306   67282 cri.go:89] found id: ""
	I1004 04:26:50.487332   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.487343   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:50.487353   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:50.487369   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:50.565167   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:50.565192   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:50.565207   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:50.646155   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:50.646194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:50.688459   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:50.688489   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:50.742416   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:50.742460   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:53.257063   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:53.270546   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:53.270618   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:53.306504   67282 cri.go:89] found id: ""
	I1004 04:26:53.306530   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.306538   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:53.306544   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:53.306594   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:53.343256   67282 cri.go:89] found id: ""
	I1004 04:26:53.343285   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.343293   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:53.343299   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:53.343352   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:53.380834   67282 cri.go:89] found id: ""
	I1004 04:26:53.380864   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.380873   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:53.380880   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:53.380940   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:53.417361   67282 cri.go:89] found id: ""
	I1004 04:26:53.417391   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.417404   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:53.417415   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:53.417479   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:53.451948   67282 cri.go:89] found id: ""
	I1004 04:26:53.451970   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.451978   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:53.451983   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:53.452039   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:53.487731   67282 cri.go:89] found id: ""
	I1004 04:26:53.487756   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.487764   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:53.487769   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:53.487836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:50.049952   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:52.050275   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:54.151420   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.647593   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:54.122386   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.623398   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:53.531549   67282 cri.go:89] found id: ""
	I1004 04:26:53.531573   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.531582   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:53.531587   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:53.531643   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:53.578123   67282 cri.go:89] found id: ""
	I1004 04:26:53.578151   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.578162   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:53.578180   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:53.578195   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:53.643062   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:53.643093   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:53.696157   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:53.696194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:53.709884   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:53.709910   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:53.791272   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:53.791297   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:53.791314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:56.371608   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:56.386293   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:56.386376   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:56.425531   67282 cri.go:89] found id: ""
	I1004 04:26:56.425560   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.425571   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:56.425578   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:56.425646   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:56.470293   67282 cri.go:89] found id: ""
	I1004 04:26:56.470326   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.470335   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:56.470340   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:56.470400   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:56.508927   67282 cri.go:89] found id: ""
	I1004 04:26:56.508955   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.508963   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:56.508968   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:56.509018   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:56.549149   67282 cri.go:89] found id: ""
	I1004 04:26:56.549178   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.549191   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:56.549199   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:56.549270   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:56.589412   67282 cri.go:89] found id: ""
	I1004 04:26:56.589441   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.589451   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:56.589459   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:56.589517   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:56.624732   67282 cri.go:89] found id: ""
	I1004 04:26:56.624760   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.624770   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:56.624776   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:56.624838   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:56.662385   67282 cri.go:89] found id: ""
	I1004 04:26:56.662413   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.662421   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:56.662427   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:56.662483   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:56.697982   67282 cri.go:89] found id: ""
	I1004 04:26:56.698014   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.698025   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:56.698036   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:56.698049   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:56.750597   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:56.750633   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:56.764884   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:56.764921   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:56.844404   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:56.844433   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:56.844451   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:56.924373   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:56.924406   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:54.548706   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.549763   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.049294   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:58.648470   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:01.146948   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.148357   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.123321   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:01.622391   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.466449   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:59.481897   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:59.481972   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:59.535384   67282 cri.go:89] found id: ""
	I1004 04:26:59.535411   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.535422   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:59.535428   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:59.535486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:59.595843   67282 cri.go:89] found id: ""
	I1004 04:26:59.595875   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.595886   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:59.595894   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:59.595954   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:59.641010   67282 cri.go:89] found id: ""
	I1004 04:26:59.641041   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.641049   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:59.641057   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:59.641102   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:59.679705   67282 cri.go:89] found id: ""
	I1004 04:26:59.679736   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.679746   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:59.679753   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:59.679828   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:59.715960   67282 cri.go:89] found id: ""
	I1004 04:26:59.715985   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.715993   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:59.715998   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:59.716047   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:59.757406   67282 cri.go:89] found id: ""
	I1004 04:26:59.757442   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.757453   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:59.757461   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:59.757528   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:59.792038   67282 cri.go:89] found id: ""
	I1004 04:26:59.792066   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.792076   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:59.792083   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:59.792141   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:59.830258   67282 cri.go:89] found id: ""
	I1004 04:26:59.830281   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.830289   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:59.830296   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:59.830308   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:59.877273   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:59.877304   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:59.932570   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:59.932610   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:59.945896   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:59.945919   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:00.020363   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:00.020392   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:00.020412   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:02.601022   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:02.615039   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:02.615112   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:02.654541   67282 cri.go:89] found id: ""
	I1004 04:27:02.654567   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.654574   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:02.654579   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:02.654638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:02.691313   67282 cri.go:89] found id: ""
	I1004 04:27:02.691338   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.691349   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:02.691355   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:02.691414   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:02.735337   67282 cri.go:89] found id: ""
	I1004 04:27:02.735367   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.735376   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:02.735383   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:02.735486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:02.769604   67282 cri.go:89] found id: ""
	I1004 04:27:02.769628   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.769638   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:02.769643   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:02.769704   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:02.812913   67282 cri.go:89] found id: ""
	I1004 04:27:02.812938   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.812949   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:02.812954   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:02.813020   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:02.849910   67282 cri.go:89] found id: ""
	I1004 04:27:02.849939   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.849949   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:02.849956   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:02.850023   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:02.889467   67282 cri.go:89] found id: ""
	I1004 04:27:02.889497   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.889509   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:02.889517   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:02.889575   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:02.928508   67282 cri.go:89] found id: ""
	I1004 04:27:02.928529   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.928537   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:02.928545   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:02.928556   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:02.942783   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:02.942821   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:03.018282   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:03.018304   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:03.018314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:03.101588   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:03.101622   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:03.149911   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:03.149937   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:01.051581   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.550066   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.646200   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:07.648479   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.622932   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.623005   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.121151   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.703125   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:05.717243   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:05.717303   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:05.752564   67282 cri.go:89] found id: ""
	I1004 04:27:05.752588   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.752597   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:05.752609   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:05.752656   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:05.786955   67282 cri.go:89] found id: ""
	I1004 04:27:05.786983   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.786994   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:05.787001   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:05.787073   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:05.823848   67282 cri.go:89] found id: ""
	I1004 04:27:05.823882   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.823893   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:05.823901   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:05.823970   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:05.866192   67282 cri.go:89] found id: ""
	I1004 04:27:05.866220   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.866238   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:05.866246   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:05.866305   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:05.904051   67282 cri.go:89] found id: ""
	I1004 04:27:05.904078   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.904089   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:05.904096   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:05.904154   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:05.940041   67282 cri.go:89] found id: ""
	I1004 04:27:05.940075   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.940085   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:05.940092   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:05.940158   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:05.975758   67282 cri.go:89] found id: ""
	I1004 04:27:05.975799   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.975810   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:05.975818   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:05.975892   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:06.011044   67282 cri.go:89] found id: ""
	I1004 04:27:06.011086   67282 logs.go:282] 0 containers: []
	W1004 04:27:06.011096   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:06.011105   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:06.011116   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:06.024900   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:06.024937   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:06.109932   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:06.109960   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:06.109976   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:06.189517   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:06.189557   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:06.230019   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:06.230048   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:06.050004   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.548768   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:10.147814   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.646430   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:10.122097   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.123967   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.785355   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:08.799156   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:08.799218   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:08.843606   67282 cri.go:89] found id: ""
	I1004 04:27:08.843634   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.843643   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:08.843648   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:08.843698   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:08.884418   67282 cri.go:89] found id: ""
	I1004 04:27:08.884443   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.884450   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:08.884456   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:08.884503   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:08.925878   67282 cri.go:89] found id: ""
	I1004 04:27:08.925906   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.925914   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:08.925920   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:08.925970   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:08.966127   67282 cri.go:89] found id: ""
	I1004 04:27:08.966157   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.966167   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:08.966173   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:08.966227   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:09.010646   67282 cri.go:89] found id: ""
	I1004 04:27:09.010672   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.010682   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:09.010702   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:09.010769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:09.049738   67282 cri.go:89] found id: ""
	I1004 04:27:09.049761   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.049768   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:09.049774   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:09.049825   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:09.082709   67282 cri.go:89] found id: ""
	I1004 04:27:09.082739   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.082747   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:09.082752   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:09.082808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:09.120574   67282 cri.go:89] found id: ""
	I1004 04:27:09.120605   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.120617   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:09.120626   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:09.120636   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:09.202880   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:09.202922   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:09.242668   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:09.242700   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:09.298662   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:09.298703   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:09.314832   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:09.314868   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:09.389062   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:11.889645   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:11.902953   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:11.903012   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:11.939846   67282 cri.go:89] found id: ""
	I1004 04:27:11.939874   67282 logs.go:282] 0 containers: []
	W1004 04:27:11.939882   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:11.939888   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:11.939936   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:11.975281   67282 cri.go:89] found id: ""
	I1004 04:27:11.975303   67282 logs.go:282] 0 containers: []
	W1004 04:27:11.975311   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:11.975317   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:11.975370   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:12.011400   67282 cri.go:89] found id: ""
	I1004 04:27:12.011428   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.011438   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:12.011443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:12.011506   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:12.046862   67282 cri.go:89] found id: ""
	I1004 04:27:12.046889   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.046898   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:12.046905   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:12.046960   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:12.081537   67282 cri.go:89] found id: ""
	I1004 04:27:12.081569   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.081581   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:12.081590   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:12.081655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:12.121982   67282 cri.go:89] found id: ""
	I1004 04:27:12.122010   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.122021   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:12.122028   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:12.122086   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:12.161419   67282 cri.go:89] found id: ""
	I1004 04:27:12.161460   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.161473   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:12.161481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:12.161549   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:12.202188   67282 cri.go:89] found id: ""
	I1004 04:27:12.202230   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.202242   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:12.202253   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:12.202267   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:12.253424   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:12.253462   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:12.268116   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:12.268141   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:12.337788   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:12.337814   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:12.337826   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:12.417359   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:12.417395   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:10.549097   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.549239   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.647267   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:17.147526   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.623050   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:16.623702   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.959596   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:14.973031   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:14.973090   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:15.011451   67282 cri.go:89] found id: ""
	I1004 04:27:15.011487   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.011497   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:15.011513   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:15.011572   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:15.055767   67282 cri.go:89] found id: ""
	I1004 04:27:15.055817   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.055829   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:15.055836   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:15.055915   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:15.096357   67282 cri.go:89] found id: ""
	I1004 04:27:15.096385   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.096394   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:15.096399   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:15.096456   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:15.131824   67282 cri.go:89] found id: ""
	I1004 04:27:15.131853   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.131863   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:15.131870   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:15.131932   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:15.169250   67282 cri.go:89] found id: ""
	I1004 04:27:15.169285   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.169299   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:15.169307   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:15.169373   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:15.206852   67282 cri.go:89] found id: ""
	I1004 04:27:15.206881   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.206889   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:15.206895   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:15.206949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:15.241392   67282 cri.go:89] found id: ""
	I1004 04:27:15.241421   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.241431   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:15.241439   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:15.241498   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:15.280697   67282 cri.go:89] found id: ""
	I1004 04:27:15.280723   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.280734   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:15.280744   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:15.280758   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:15.361681   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:15.361716   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:15.404640   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:15.404676   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:15.457287   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:15.457326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:15.471162   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:15.471188   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:15.544157   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:18.045094   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:18.060228   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:18.060310   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:18.096659   67282 cri.go:89] found id: ""
	I1004 04:27:18.096688   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.096697   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:18.096703   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:18.096757   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:18.135538   67282 cri.go:89] found id: ""
	I1004 04:27:18.135565   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.135573   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:18.135579   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:18.135629   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:18.171051   67282 cri.go:89] found id: ""
	I1004 04:27:18.171082   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.171098   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:18.171106   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:18.171168   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:18.205696   67282 cri.go:89] found id: ""
	I1004 04:27:18.205725   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.205735   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:18.205742   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:18.205803   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:18.240545   67282 cri.go:89] found id: ""
	I1004 04:27:18.240566   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.240576   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:18.240584   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:18.240638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:18.279185   67282 cri.go:89] found id: ""
	I1004 04:27:18.279221   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.279232   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:18.279239   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:18.279310   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:18.318395   67282 cri.go:89] found id: ""
	I1004 04:27:18.318417   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.318424   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:18.318430   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:18.318476   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:18.352367   67282 cri.go:89] found id: ""
	I1004 04:27:18.352390   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.352398   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:18.352407   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:18.352420   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:18.365604   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:18.365637   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:18.438407   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:18.438427   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:18.438438   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:14.549690   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:16.550244   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:18.550355   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:19.647031   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:22.147826   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:19.126090   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:21.623910   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:18.513645   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:18.513679   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:18.557224   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:18.557250   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:21.111005   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:21.126573   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:21.126631   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:21.161161   67282 cri.go:89] found id: ""
	I1004 04:27:21.161190   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.161201   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:21.161207   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:21.161258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:21.199517   67282 cri.go:89] found id: ""
	I1004 04:27:21.199544   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.199555   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:21.199562   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:21.199625   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:21.236210   67282 cri.go:89] found id: ""
	I1004 04:27:21.236238   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.236246   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:21.236251   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:21.236311   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:21.272720   67282 cri.go:89] found id: ""
	I1004 04:27:21.272746   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.272753   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:21.272759   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:21.272808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:21.311439   67282 cri.go:89] found id: ""
	I1004 04:27:21.311474   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.311484   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:21.311491   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:21.311551   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:21.360400   67282 cri.go:89] found id: ""
	I1004 04:27:21.360427   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.360436   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:21.360443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:21.360511   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:21.394627   67282 cri.go:89] found id: ""
	I1004 04:27:21.394656   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.394667   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:21.394673   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:21.394721   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:21.429736   67282 cri.go:89] found id: ""
	I1004 04:27:21.429762   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.429770   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:21.429778   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:21.429789   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:21.482773   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:21.482808   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:21.497570   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:21.497595   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:21.582335   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:21.582355   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:21.582367   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:21.662196   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:21.662230   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:21.050000   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:23.050516   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.647074   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:27.147999   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.123142   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:26.624049   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.205743   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:24.222878   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:24.222951   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:24.263410   67282 cri.go:89] found id: ""
	I1004 04:27:24.263450   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.263462   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:24.263469   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:24.263532   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:24.306892   67282 cri.go:89] found id: ""
	I1004 04:27:24.306923   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.306934   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:24.306941   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:24.307008   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:24.345522   67282 cri.go:89] found id: ""
	I1004 04:27:24.345559   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.345571   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:24.345579   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:24.345638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:24.384893   67282 cri.go:89] found id: ""
	I1004 04:27:24.384918   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.384925   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:24.384931   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:24.384978   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:24.420998   67282 cri.go:89] found id: ""
	I1004 04:27:24.421025   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.421036   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:24.421043   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:24.421105   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:24.456277   67282 cri.go:89] found id: ""
	I1004 04:27:24.456305   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.456315   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:24.456322   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:24.456383   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:24.497852   67282 cri.go:89] found id: ""
	I1004 04:27:24.497881   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.497892   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:24.497900   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:24.497960   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:24.538702   67282 cri.go:89] found id: ""
	I1004 04:27:24.538736   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.538755   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:24.538766   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:24.538778   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:24.553747   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:24.553773   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:24.638059   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:24.638081   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:24.638093   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:24.718165   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:24.718212   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:24.759770   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:24.759811   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:27.311684   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:27.327493   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:27.327570   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:27.362804   67282 cri.go:89] found id: ""
	I1004 04:27:27.362827   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.362836   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:27.362841   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:27.362888   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:27.401576   67282 cri.go:89] found id: ""
	I1004 04:27:27.401604   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.401614   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:27.401621   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:27.401682   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:27.445152   67282 cri.go:89] found id: ""
	I1004 04:27:27.445177   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.445187   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:27.445193   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:27.445240   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:27.482710   67282 cri.go:89] found id: ""
	I1004 04:27:27.482734   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.482742   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:27.482749   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:27.482808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:27.519459   67282 cri.go:89] found id: ""
	I1004 04:27:27.519488   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.519498   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:27.519505   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:27.519569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:27.559381   67282 cri.go:89] found id: ""
	I1004 04:27:27.559407   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.559417   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:27.559423   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:27.559468   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:27.609040   67282 cri.go:89] found id: ""
	I1004 04:27:27.609068   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.609076   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:27.609081   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:27.609128   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:27.654537   67282 cri.go:89] found id: ""
	I1004 04:27:27.654569   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.654579   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:27.654590   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:27.654603   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:27.709062   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:27.709098   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:27.722931   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:27.722955   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:27.796863   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:27.796884   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:27.796895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:27.879840   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:27.879876   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:25.549643   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:27.551373   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:29.646879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:31.646956   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:29.122087   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:31.122774   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:30.423644   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:30.439256   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:30.439311   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:30.479612   67282 cri.go:89] found id: ""
	I1004 04:27:30.479640   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.479648   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:30.479654   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:30.479750   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:30.522846   67282 cri.go:89] found id: ""
	I1004 04:27:30.522879   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.522890   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:30.522898   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:30.522946   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:30.558935   67282 cri.go:89] found id: ""
	I1004 04:27:30.558962   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.558971   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:30.558976   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:30.559032   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:30.603383   67282 cri.go:89] found id: ""
	I1004 04:27:30.603411   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.603421   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:30.603428   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:30.603492   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:30.644700   67282 cri.go:89] found id: ""
	I1004 04:27:30.644727   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.644737   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:30.644744   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:30.644799   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:30.680328   67282 cri.go:89] found id: ""
	I1004 04:27:30.680358   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.680367   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:30.680372   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:30.680419   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:30.717973   67282 cri.go:89] found id: ""
	I1004 04:27:30.717995   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.718005   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:30.718021   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:30.718082   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:30.755838   67282 cri.go:89] found id: ""
	I1004 04:27:30.755866   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.755874   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:30.755882   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:30.755893   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:30.809999   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:30.810036   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:30.824447   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:30.824491   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:30.902008   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:30.902030   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:30.902043   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:30.986938   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:30.986984   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:30.049983   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:32.050033   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:34.050671   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.647707   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:36.146619   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.624575   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:36.122046   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.531108   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:33.546681   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:33.546759   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:33.586444   67282 cri.go:89] found id: ""
	I1004 04:27:33.586469   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.586479   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:33.586486   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:33.586552   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:33.629340   67282 cri.go:89] found id: ""
	I1004 04:27:33.629365   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.629373   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:33.629378   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:33.629429   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:33.668446   67282 cri.go:89] found id: ""
	I1004 04:27:33.668473   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.668483   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:33.668490   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:33.668548   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:33.706287   67282 cri.go:89] found id: ""
	I1004 04:27:33.706312   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.706320   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:33.706327   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:33.706385   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:33.746161   67282 cri.go:89] found id: ""
	I1004 04:27:33.746189   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.746200   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:33.746207   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:33.746270   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:33.782157   67282 cri.go:89] found id: ""
	I1004 04:27:33.782184   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.782194   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:33.782200   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:33.782262   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:33.820332   67282 cri.go:89] found id: ""
	I1004 04:27:33.820361   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.820371   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:33.820378   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:33.820437   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:33.859431   67282 cri.go:89] found id: ""
	I1004 04:27:33.859458   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.859467   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:33.859475   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:33.859485   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:33.910259   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:33.910292   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:33.925149   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:33.925177   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:34.006153   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:34.006187   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:34.006202   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:34.115882   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:34.115916   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:36.662964   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:36.677071   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:36.677139   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:36.720785   67282 cri.go:89] found id: ""
	I1004 04:27:36.720807   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.720818   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:36.720826   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:36.720875   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:36.757535   67282 cri.go:89] found id: ""
	I1004 04:27:36.757563   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.757574   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:36.757582   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:36.757630   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:36.800989   67282 cri.go:89] found id: ""
	I1004 04:27:36.801024   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.801038   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:36.801046   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:36.801112   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:36.837101   67282 cri.go:89] found id: ""
	I1004 04:27:36.837122   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.837131   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:36.837136   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:36.837181   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:36.876325   67282 cri.go:89] found id: ""
	I1004 04:27:36.876358   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.876370   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:36.876379   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:36.876444   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:36.914720   67282 cri.go:89] found id: ""
	I1004 04:27:36.914749   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.914759   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:36.914767   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:36.914828   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:36.949672   67282 cri.go:89] found id: ""
	I1004 04:27:36.949694   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.949701   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:36.949706   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:36.949754   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:36.983374   67282 cri.go:89] found id: ""
	I1004 04:27:36.983406   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.983416   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:36.983427   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:36.983440   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:37.039040   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:37.039075   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:37.054873   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:37.054898   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:37.131537   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:37.131562   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:37.131577   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:37.213958   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:37.213990   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:36.548751   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:39.049804   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:38.646028   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:40.646213   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:42.648505   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:38.623560   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:40.623721   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:43.122033   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:39.754264   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:39.771465   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:39.771545   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:39.829530   67282 cri.go:89] found id: ""
	I1004 04:27:39.829560   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.829572   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:39.829580   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:39.829639   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:39.876055   67282 cri.go:89] found id: ""
	I1004 04:27:39.876078   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.876090   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:39.876095   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:39.876142   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:39.913304   67282 cri.go:89] found id: ""
	I1004 04:27:39.913327   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.913335   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:39.913340   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:39.913389   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:39.948821   67282 cri.go:89] found id: ""
	I1004 04:27:39.948847   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.948855   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:39.948862   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:39.948916   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:39.986994   67282 cri.go:89] found id: ""
	I1004 04:27:39.987023   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.987034   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:39.987041   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:39.987141   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:40.026627   67282 cri.go:89] found id: ""
	I1004 04:27:40.026656   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.026668   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:40.026675   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:40.026734   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:40.067028   67282 cri.go:89] found id: ""
	I1004 04:27:40.067068   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.067079   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:40.067086   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:40.067144   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:40.105638   67282 cri.go:89] found id: ""
	I1004 04:27:40.105667   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.105677   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:40.105694   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:40.105707   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:40.159425   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:40.159467   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:40.175045   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:40.175073   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:40.261967   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:40.261989   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:40.262002   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:40.345317   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:40.345354   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:42.888115   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:42.901889   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:42.901948   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:42.938556   67282 cri.go:89] found id: ""
	I1004 04:27:42.938587   67282 logs.go:282] 0 containers: []
	W1004 04:27:42.938597   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:42.938604   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:42.938668   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:42.974569   67282 cri.go:89] found id: ""
	I1004 04:27:42.974595   67282 logs.go:282] 0 containers: []
	W1004 04:27:42.974606   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:42.974613   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:42.974679   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:43.010552   67282 cri.go:89] found id: ""
	I1004 04:27:43.010581   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.010593   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:43.010600   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:43.010655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:43.046204   67282 cri.go:89] found id: ""
	I1004 04:27:43.046237   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.046247   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:43.046254   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:43.046313   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:43.081612   67282 cri.go:89] found id: ""
	I1004 04:27:43.081644   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.081655   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:43.081662   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:43.081729   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:43.121103   67282 cri.go:89] found id: ""
	I1004 04:27:43.121126   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.121133   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:43.121139   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:43.121191   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:43.157104   67282 cri.go:89] found id: ""
	I1004 04:27:43.157128   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.157136   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:43.157141   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:43.157196   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:43.198927   67282 cri.go:89] found id: ""
	I1004 04:27:43.198951   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.198958   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:43.198966   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:43.198975   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:43.254534   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:43.254563   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:43.268106   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:43.268130   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:43.344382   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:43.344410   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:43.344425   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:43.426916   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:43.426948   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:41.549364   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:43.549590   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.146452   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:47.148300   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.126135   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:47.622568   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.966806   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:45.980187   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:45.980252   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:46.014196   67282 cri.go:89] found id: ""
	I1004 04:27:46.014220   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.014228   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:46.014233   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:46.014295   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:46.053910   67282 cri.go:89] found id: ""
	I1004 04:27:46.053940   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.053951   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:46.053957   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:46.054013   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:46.087896   67282 cri.go:89] found id: ""
	I1004 04:27:46.087921   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.087930   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:46.087936   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:46.087985   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:46.123441   67282 cri.go:89] found id: ""
	I1004 04:27:46.123465   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.123475   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:46.123481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:46.123545   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:46.159664   67282 cri.go:89] found id: ""
	I1004 04:27:46.159688   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.159698   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:46.159704   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:46.159761   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:46.195474   67282 cri.go:89] found id: ""
	I1004 04:27:46.195501   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.195512   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:46.195525   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:46.195569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:46.228670   67282 cri.go:89] found id: ""
	I1004 04:27:46.228693   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.228701   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:46.228706   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:46.228759   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:46.265278   67282 cri.go:89] found id: ""
	I1004 04:27:46.265303   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.265311   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:46.265325   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:46.265338   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:46.315135   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:46.315163   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:46.327765   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:46.327797   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:46.393157   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:46.393173   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:46.393184   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:46.473026   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:46.473058   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:46.049285   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:48.549053   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:49.647027   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:52.146841   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:50.122921   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:52.622913   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:49.011972   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:49.025718   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:49.025783   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:49.062749   67282 cri.go:89] found id: ""
	I1004 04:27:49.062774   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.062782   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:49.062788   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:49.062844   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:49.100838   67282 cri.go:89] found id: ""
	I1004 04:27:49.100886   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.100897   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:49.100904   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:49.100961   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:49.139966   67282 cri.go:89] found id: ""
	I1004 04:27:49.139990   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.140000   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:49.140007   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:49.140088   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:49.179347   67282 cri.go:89] found id: ""
	I1004 04:27:49.179373   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.179384   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:49.179391   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:49.179435   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:49.218086   67282 cri.go:89] found id: ""
	I1004 04:27:49.218112   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.218121   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:49.218127   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:49.218181   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:49.254779   67282 cri.go:89] found id: ""
	I1004 04:27:49.254811   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.254823   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:49.254830   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:49.254888   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:49.287351   67282 cri.go:89] found id: ""
	I1004 04:27:49.287381   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.287392   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:49.287398   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:49.287456   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:49.320051   67282 cri.go:89] found id: ""
	I1004 04:27:49.320078   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.320089   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:49.320100   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:49.320112   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:49.371270   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:49.371300   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:49.384403   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:49.384432   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:49.468132   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:49.468154   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:49.468167   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:49.543179   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:49.543211   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:52.093235   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:52.108446   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:52.108520   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:52.147590   67282 cri.go:89] found id: ""
	I1004 04:27:52.147613   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.147620   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:52.147626   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:52.147677   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:52.183066   67282 cri.go:89] found id: ""
	I1004 04:27:52.183095   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.183105   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:52.183112   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:52.183170   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:52.223109   67282 cri.go:89] found id: ""
	I1004 04:27:52.223140   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.223154   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:52.223165   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:52.223223   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:52.259547   67282 cri.go:89] found id: ""
	I1004 04:27:52.259573   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.259582   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:52.259587   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:52.259638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:52.296934   67282 cri.go:89] found id: ""
	I1004 04:27:52.296961   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.296971   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:52.296979   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:52.297040   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:52.331650   67282 cri.go:89] found id: ""
	I1004 04:27:52.331671   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.331679   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:52.331684   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:52.331728   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:52.365111   67282 cri.go:89] found id: ""
	I1004 04:27:52.365139   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.365150   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:52.365157   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:52.365239   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:52.400974   67282 cri.go:89] found id: ""
	I1004 04:27:52.401010   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.401023   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:52.401035   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:52.401049   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:52.484732   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:52.484771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:52.523322   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:52.523348   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:52.576671   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:52.576702   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:52.590263   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:52.590291   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:52.666646   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:50.549475   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:53.049259   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:54.646262   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.153196   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:55.123174   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.123932   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:55.166856   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:55.181481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:55.181562   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:55.218023   67282 cri.go:89] found id: ""
	I1004 04:27:55.218048   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.218056   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:55.218063   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:55.218121   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:55.256439   67282 cri.go:89] found id: ""
	I1004 04:27:55.256464   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.256472   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:55.256477   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:55.256531   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:55.294563   67282 cri.go:89] found id: ""
	I1004 04:27:55.294588   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.294596   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:55.294601   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:55.294656   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:55.331266   67282 cri.go:89] found id: ""
	I1004 04:27:55.331290   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.331300   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:55.331306   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:55.331370   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:55.367286   67282 cri.go:89] found id: ""
	I1004 04:27:55.367314   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.367325   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:55.367332   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:55.367391   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:55.402031   67282 cri.go:89] found id: ""
	I1004 04:27:55.402054   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.402062   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:55.402068   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:55.402122   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:55.437737   67282 cri.go:89] found id: ""
	I1004 04:27:55.437764   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.437774   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:55.437780   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:55.437842   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:55.470654   67282 cri.go:89] found id: ""
	I1004 04:27:55.470692   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.470704   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:55.470713   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:55.470726   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:55.521364   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:55.521393   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:55.534691   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:55.534716   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:55.600902   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:55.600923   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:55.600933   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:55.678896   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:55.678940   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:58.220086   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:58.234049   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:58.234110   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:58.281112   67282 cri.go:89] found id: ""
	I1004 04:27:58.281135   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.281143   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:58.281148   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:58.281191   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:58.320549   67282 cri.go:89] found id: ""
	I1004 04:27:58.320575   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.320584   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:58.320589   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:58.320635   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:58.355139   67282 cri.go:89] found id: ""
	I1004 04:27:58.355166   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.355174   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:58.355179   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:58.355225   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:58.387809   67282 cri.go:89] found id: ""
	I1004 04:27:58.387836   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.387846   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:58.387851   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:58.387908   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:58.420264   67282 cri.go:89] found id: ""
	I1004 04:27:58.420287   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.420295   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:58.420300   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:58.420349   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:58.455409   67282 cri.go:89] found id: ""
	I1004 04:27:58.455431   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.455438   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:58.455443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:58.455487   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:58.488708   67282 cri.go:89] found id: ""
	I1004 04:27:58.488734   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.488742   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:58.488749   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:58.488797   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:55.051622   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.548584   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:59.646699   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:01.648277   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:59.623008   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:02.122303   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:58.522139   67282 cri.go:89] found id: ""
	I1004 04:27:58.522161   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.522169   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:58.522176   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:58.522187   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:58.604653   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:58.604683   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:58.645141   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:58.645169   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:58.699716   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:58.699748   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:58.713197   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:58.713228   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:58.781998   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:01.282429   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:01.297266   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:01.297343   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:01.330421   67282 cri.go:89] found id: ""
	I1004 04:28:01.330446   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.330454   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:28:01.330459   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:01.330514   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:01.366960   67282 cri.go:89] found id: ""
	I1004 04:28:01.366983   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.366992   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:28:01.366998   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:01.367067   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:01.400886   67282 cri.go:89] found id: ""
	I1004 04:28:01.400910   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.400920   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:28:01.400931   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:01.400987   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:01.435556   67282 cri.go:89] found id: ""
	I1004 04:28:01.435586   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.435594   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:28:01.435601   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:01.435649   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:01.475772   67282 cri.go:89] found id: ""
	I1004 04:28:01.475810   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.475820   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:28:01.475826   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:01.475884   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:01.512380   67282 cri.go:89] found id: ""
	I1004 04:28:01.512403   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.512411   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:28:01.512417   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:01.512465   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:01.550488   67282 cri.go:89] found id: ""
	I1004 04:28:01.550517   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.550528   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:01.550536   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:28:01.550595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:28:01.586216   67282 cri.go:89] found id: ""
	I1004 04:28:01.586249   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.586261   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:28:01.586271   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:01.586285   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:01.640819   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:01.640860   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:01.656990   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:01.657020   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:28:01.731326   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:01.731354   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:01.731368   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:01.810007   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:28:01.810044   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:59.548748   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:01.551035   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.043116   66755 pod_ready.go:82] duration metric: took 4m0.000354814s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" ...
	E1004 04:28:04.043143   66755 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" (will not retry!)
	I1004 04:28:04.043167   66755 pod_ready.go:39] duration metric: took 4m15.403862245s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:04.043219   66755 kubeadm.go:597] duration metric: took 4m23.226496183s to restartPrimaryControlPlane
	W1004 04:28:04.043288   66755 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1004 04:28:04.043316   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:28:04.146697   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:06.147038   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:08.147201   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.122463   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:06.622379   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.352648   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:04.366150   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:04.366227   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:04.403272   67282 cri.go:89] found id: ""
	I1004 04:28:04.403298   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.403308   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:28:04.403315   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:04.403371   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:04.439237   67282 cri.go:89] found id: ""
	I1004 04:28:04.439269   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.439280   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:28:04.439287   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:04.439345   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:04.475532   67282 cri.go:89] found id: ""
	I1004 04:28:04.475558   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.475569   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:28:04.475576   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:04.475638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:04.511738   67282 cri.go:89] found id: ""
	I1004 04:28:04.511765   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.511775   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:28:04.511792   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:04.511850   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:04.553536   67282 cri.go:89] found id: ""
	I1004 04:28:04.553561   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.553568   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:28:04.553574   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:04.553625   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:04.589016   67282 cri.go:89] found id: ""
	I1004 04:28:04.589044   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.589053   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:28:04.589058   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:04.589106   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:04.622780   67282 cri.go:89] found id: ""
	I1004 04:28:04.622808   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.622817   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:04.622823   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:28:04.622879   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:28:04.662620   67282 cri.go:89] found id: ""
	I1004 04:28:04.662641   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.662649   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:28:04.662659   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:04.662669   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:04.717894   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:04.717928   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:04.732353   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:04.732385   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:28:04.806443   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:04.806469   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:04.806492   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:04.887684   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:28:04.887717   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:07.426630   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:07.440242   67282 kubeadm.go:597] duration metric: took 4m3.475062199s to restartPrimaryControlPlane
	W1004 04:28:07.440318   67282 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1004 04:28:07.440346   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:28:08.147532   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:08.162175   67282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:28:08.172013   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:28:08.181741   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:28:08.181757   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:28:08.181801   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:28:08.191002   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:28:08.191046   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:28:08.200929   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:28:08.210241   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:28:08.210286   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:28:08.219693   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:28:08.229497   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:28:08.229534   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:28:08.239583   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:28:08.249207   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:28:08.249252   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:28:08.258516   67282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:28:08.328054   67282 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:28:08.328132   67282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:28:08.472265   67282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:28:08.472420   67282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:28:08.472543   67282 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 04:28:08.655873   67282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:28:08.657726   67282 out.go:235]   - Generating certificates and keys ...
	I1004 04:28:08.657817   67282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:28:08.657876   67282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:28:08.657942   67282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:28:08.658034   67282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:28:08.658149   67282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:28:08.658235   67282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:28:08.658309   67282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:28:08.658396   67282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:28:08.658503   67282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:28:08.658600   67282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:28:08.658651   67282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:28:08.658707   67282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:28:08.706486   67282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:28:08.909036   67282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:28:09.285968   67282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:28:09.499963   67282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:28:09.516914   67282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:28:09.517832   67282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:28:09.517900   67282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:28:09.664925   67282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:28:10.147391   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:12.646012   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:09.121686   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:11.123086   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:13.123578   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:09.666691   67282 out.go:235]   - Booting up control plane ...
	I1004 04:28:09.666889   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:28:09.671298   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:28:09.672046   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:28:09.672956   67282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:28:09.685069   67282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 04:28:14.646614   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:16.646683   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:15.125374   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:17.125685   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:18.646777   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:21.147299   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:19.623872   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:22.123077   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:23.646460   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:25.647096   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:28.147324   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:24.623730   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:27.123516   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:30.379460   66755 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.336110507s)
	I1004 04:28:30.379544   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:30.395622   66755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:28:30.406790   66755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:28:30.417380   66755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:28:30.417408   66755 kubeadm.go:157] found existing configuration files:
	
	I1004 04:28:30.417458   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:28:30.427925   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:28:30.427993   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:28:30.438694   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:28:30.448898   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:28:30.448972   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:28:30.459463   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:28:30.469227   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:28:30.469281   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:28:30.479979   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:28:30.489873   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:28:30.489936   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:28:30.499999   66755 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:28:30.549707   66755 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 04:28:30.549771   66755 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:28:30.663468   66755 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:28:30.663595   66755 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:28:30.663698   66755 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 04:28:30.675750   66755 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:28:30.677655   66755 out.go:235]   - Generating certificates and keys ...
	I1004 04:28:30.677760   66755 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:28:30.677868   66755 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:28:30.678010   66755 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:28:30.678102   66755 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:28:30.678217   66755 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:28:30.678289   66755 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:28:30.678378   66755 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:28:30.678470   66755 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:28:30.678566   66755 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:28:30.678732   66755 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:28:30.679295   66755 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:28:30.679383   66755 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:28:30.826979   66755 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:28:30.900919   66755 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 04:28:31.098221   66755 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:28:31.243668   66755 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:28:31.411766   66755 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:28:31.412181   66755 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:28:31.414652   66755 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:28:30.646927   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:32.647767   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:29.129148   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:31.623284   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:31.416504   66755 out.go:235]   - Booting up control plane ...
	I1004 04:28:31.416620   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:28:31.416730   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:28:31.418284   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:28:31.437379   66755 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:28:31.443450   66755 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:28:31.443505   66755 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:28:31.586540   66755 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 04:28:31.586706   66755 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 04:28:32.088382   66755 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.195244ms
	I1004 04:28:32.088510   66755 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 04:28:37.090291   66755 kubeadm.go:310] [api-check] The API server is healthy after 5.001756025s
	I1004 04:28:37.103845   66755 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 04:28:37.127230   66755 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 04:28:37.156917   66755 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 04:28:37.157181   66755 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-934812 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 04:28:37.171399   66755 kubeadm.go:310] [bootstrap-token] Using token: 1wt5ey.lvccf2aeyngf9mt3
	I1004 04:28:34.648249   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:37.148680   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:33.623901   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:36.122762   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:38.123147   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:37.172939   66755 out.go:235]   - Configuring RBAC rules ...
	I1004 04:28:37.173086   66755 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 04:28:37.179454   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 04:28:37.188765   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 04:28:37.192599   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 04:28:37.200359   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 04:28:37.204872   66755 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 04:28:37.498753   66755 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 04:28:37.931621   66755 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 04:28:38.497855   66755 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 04:28:38.498949   66755 kubeadm.go:310] 
	I1004 04:28:38.499023   66755 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 04:28:38.499055   66755 kubeadm.go:310] 
	I1004 04:28:38.499183   66755 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 04:28:38.499195   66755 kubeadm.go:310] 
	I1004 04:28:38.499229   66755 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 04:28:38.499316   66755 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 04:28:38.499385   66755 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 04:28:38.499393   66755 kubeadm.go:310] 
	I1004 04:28:38.499481   66755 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 04:28:38.499498   66755 kubeadm.go:310] 
	I1004 04:28:38.499563   66755 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 04:28:38.499571   66755 kubeadm.go:310] 
	I1004 04:28:38.499653   66755 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 04:28:38.499742   66755 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 04:28:38.499871   66755 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 04:28:38.499888   66755 kubeadm.go:310] 
	I1004 04:28:38.499994   66755 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 04:28:38.500104   66755 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 04:28:38.500115   66755 kubeadm.go:310] 
	I1004 04:28:38.500220   66755 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1wt5ey.lvccf2aeyngf9mt3 \
	I1004 04:28:38.500350   66755 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 \
	I1004 04:28:38.500387   66755 kubeadm.go:310] 	--control-plane 
	I1004 04:28:38.500402   66755 kubeadm.go:310] 
	I1004 04:28:38.500478   66755 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 04:28:38.500484   66755 kubeadm.go:310] 
	I1004 04:28:38.500563   66755 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1wt5ey.lvccf2aeyngf9mt3 \
	I1004 04:28:38.500686   66755 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 
	I1004 04:28:38.501820   66755 kubeadm.go:310] W1004 04:28:30.522396    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 04:28:38.502147   66755 kubeadm.go:310] W1004 04:28:30.524006    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 04:28:38.502282   66755 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:28:38.502311   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:28:38.502321   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:28:38.504185   66755 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:28:38.505600   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:28:38.518746   66755 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:28:38.541311   66755 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:28:38.541422   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:38.541460   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-934812 minikube.k8s.io/updated_at=2024_10_04T04_28_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=embed-certs-934812 minikube.k8s.io/primary=true
	I1004 04:28:38.605537   66755 ops.go:34] apiserver oom_adj: -16
	I1004 04:28:38.765084   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:39.646916   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:41.651456   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:39.265365   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:39.765925   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:40.265135   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:40.766204   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:41.265734   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:41.765404   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:42.265993   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:42.765826   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:43.265776   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:43.353243   66755 kubeadm.go:1113] duration metric: took 4.811892444s to wait for elevateKubeSystemPrivileges
	I1004 04:28:43.353288   66755 kubeadm.go:394] duration metric: took 5m2.586827656s to StartCluster
	I1004 04:28:43.353313   66755 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:28:43.353402   66755 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:28:43.355058   66755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:28:43.355309   66755 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:28:43.355388   66755 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:28:43.355533   66755 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-934812"
	I1004 04:28:43.355542   66755 addons.go:69] Setting default-storageclass=true in profile "embed-certs-934812"
	I1004 04:28:43.355556   66755 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-934812"
	I1004 04:28:43.355563   66755 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-934812"
	W1004 04:28:43.355568   66755 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:28:43.355584   66755 config.go:182] Loaded profile config "embed-certs-934812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:28:43.355598   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.355639   66755 addons.go:69] Setting metrics-server=true in profile "embed-certs-934812"
	I1004 04:28:43.355658   66755 addons.go:234] Setting addon metrics-server=true in "embed-certs-934812"
	W1004 04:28:43.355666   66755 addons.go:243] addon metrics-server should already be in state true
	I1004 04:28:43.355694   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.356024   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356075   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356095   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.356108   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.356075   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356173   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.357087   66755 out.go:177] * Verifying Kubernetes components...
	I1004 04:28:43.358428   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:28:43.373646   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45715
	I1004 04:28:43.373874   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I1004 04:28:43.374344   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.374344   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.374927   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.374948   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.375003   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.375027   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.375285   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.375342   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.375499   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.375884   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.375928   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.376269   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43725
	I1004 04:28:43.376636   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.377073   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.377099   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.377455   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.377883   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.377918   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.378402   66755 addons.go:234] Setting addon default-storageclass=true in "embed-certs-934812"
	W1004 04:28:43.378420   66755 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:28:43.378447   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.378705   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.378734   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.394001   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
	I1004 04:28:43.394289   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I1004 04:28:43.394645   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.394760   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.395195   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.395213   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.395302   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.395317   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.395596   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.395626   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.395842   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.396120   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.396160   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.397590   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.399391   66755 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:28:43.400581   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:28:43.400598   66755 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:28:43.400619   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.405134   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.405778   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I1004 04:28:43.405968   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.405996   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.406230   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.406383   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.406428   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.406571   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.406698   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.406825   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.406847   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.407455   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.407600   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.409278   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.411006   66755 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:28:40.622426   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:42.623400   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:43.412106   66755 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:28:43.412124   66755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:28:43.412389   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.414167   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
	I1004 04:28:43.414796   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.415285   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.415309   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.415657   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.415710   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.415911   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.416195   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.416217   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.416440   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.416628   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.416759   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.416856   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.418235   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.418426   66755 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:28:43.418436   66755 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:28:43.418456   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.421305   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.421761   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.421779   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.421966   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.422654   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.422789   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.422877   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.580648   66755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:28:43.615728   66755 node_ready.go:35] waiting up to 6m0s for node "embed-certs-934812" to be "Ready" ...
	I1004 04:28:43.625558   66755 node_ready.go:49] node "embed-certs-934812" has status "Ready":"True"
	I1004 04:28:43.625600   66755 node_ready.go:38] duration metric: took 9.827384ms for node "embed-certs-934812" to be "Ready" ...
	I1004 04:28:43.625612   66755 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:43.634425   66755 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:43.748926   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:28:43.774727   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:28:43.781558   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:28:43.781589   66755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:28:43.838039   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:28:43.838067   66755 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:28:43.945364   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:28:43.945392   66755 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:28:44.005000   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:28:44.253491   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.253521   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.253828   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.253896   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.253910   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.253925   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.253938   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.254130   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.254149   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.254164   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.267367   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.267396   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.267680   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.267700   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.864663   66755 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089890385s)
	I1004 04:28:44.864722   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.864734   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.865046   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.865070   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.865086   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.865095   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.866872   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.866877   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.866907   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.138868   66755 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133828074s)
	I1004 04:28:45.138926   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:45.138942   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:45.139243   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:45.139265   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.139276   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:45.139283   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:45.139484   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:45.139497   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.139507   66755 addons.go:475] Verifying addon metrics-server=true in "embed-certs-934812"
	I1004 04:28:45.141046   66755 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1004 04:28:44.147013   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:44.648117   67541 pod_ready.go:82] duration metric: took 4m0.007930603s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	E1004 04:28:44.648144   67541 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 04:28:44.648154   67541 pod_ready.go:39] duration metric: took 4m7.419382357s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:44.648170   67541 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:28:44.648200   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:44.648256   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:44.712473   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:44.712500   67541 cri.go:89] found id: ""
	I1004 04:28:44.712510   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:44.712568   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.717619   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:44.717688   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:44.760036   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:44.760061   67541 cri.go:89] found id: ""
	I1004 04:28:44.760071   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:44.760124   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.766402   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:44.766465   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:44.821766   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:44.821792   67541 cri.go:89] found id: ""
	I1004 04:28:44.821801   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:44.821858   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.826315   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:44.826370   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:44.873526   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:44.873547   67541 cri.go:89] found id: ""
	I1004 04:28:44.873556   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:44.873615   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.878375   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:44.878442   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:44.920240   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:44.920261   67541 cri.go:89] found id: ""
	I1004 04:28:44.920270   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:44.920322   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.925102   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:44.925158   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:44.967386   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:44.967406   67541 cri.go:89] found id: ""
	I1004 04:28:44.967416   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:44.967471   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.971979   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:44.972056   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:45.009842   67541 cri.go:89] found id: ""
	I1004 04:28:45.009869   67541 logs.go:282] 0 containers: []
	W1004 04:28:45.009881   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:45.009890   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:45.009952   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:45.055166   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:45.055189   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:45.055194   67541 cri.go:89] found id: ""
	I1004 04:28:45.055201   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:45.055258   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:45.060362   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:45.066118   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:45.066351   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:45.128185   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:45.128221   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:45.270042   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:45.270084   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:45.309065   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:45.309093   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:45.352299   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:45.352327   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:45.401846   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:45.401882   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:45.447474   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:45.447530   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:45.500734   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:45.500765   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:46.040224   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:46.040275   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:46.112675   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:46.112716   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:46.128530   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:46.128553   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:46.175007   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:46.175039   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:46.222706   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:46.222738   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
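	(Editor's aside: the "Gathering logs for ..." steps above shell out to crictl for each discovered container ID. As a rough illustration only — the container ID, tail count, and sudo usage are copied from the log as assumptions, and this is not minikube's logs.go implementation — the same collection could be scripted in Go like this:)

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherContainerLogs fetches the last `tail` lines of a CRI container's
	// logs via crictl, mirroring the "Gathering logs for ..." steps above.
	// Illustrative sketch only; error handling is minimal on purpose.
	func gatherContainerLogs(containerID string, tail int) (string, error) {
		cmd := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), containerID)
		out, err := cmd.CombinedOutput()
		return string(out), err
	}

	func main() {
		// Container ID taken from the log above purely as an example.
		logs, err := gatherContainerLogs("7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0", 400)
		if err != nil {
			fmt.Println("crictl failed:", err)
		}
		fmt.Println(logs)
	}
	```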
	I1004 04:28:44.623804   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:47.122548   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:45.142166   66755 addons.go:510] duration metric: took 1.786788452s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1004 04:28:45.642731   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:46.641705   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.641730   66755 pod_ready.go:82] duration metric: took 3.007270041s for pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.641743   66755 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.646744   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.646767   66755 pod_ready.go:82] duration metric: took 5.01485ms for pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.646777   66755 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.652554   66755 pod_ready.go:93] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.652572   66755 pod_ready.go:82] duration metric: took 5.78883ms for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.652580   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:48.659404   66755 pod_ready.go:103] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:51.158765   66755 pod_ready.go:93] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.158787   66755 pod_ready.go:82] duration metric: took 4.506200726s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.158796   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.162949   66755 pod_ready.go:93] pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.162967   66755 pod_ready.go:82] duration metric: took 4.16468ms for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.162975   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9czbc" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.167309   66755 pod_ready.go:93] pod "kube-proxy-9czbc" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.167327   66755 pod_ready.go:82] duration metric: took 4.347415ms for pod "kube-proxy-9czbc" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.167334   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.171048   66755 pod_ready.go:93] pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.171065   66755 pod_ready.go:82] duration metric: took 3.724785ms for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.171071   66755 pod_ready.go:39] duration metric: took 7.545445402s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:51.171083   66755 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:28:51.171126   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:51.186751   66755 api_server.go:72] duration metric: took 7.831380288s to wait for apiserver process to appear ...
	I1004 04:28:51.186782   66755 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:28:51.186799   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:28:51.192753   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 200:
	ok
	I1004 04:28:51.194259   66755 api_server.go:141] control plane version: v1.31.1
	I1004 04:28:51.194284   66755 api_server.go:131] duration metric: took 7.491456ms to wait for apiserver health ...
	I1004 04:28:51.194292   66755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:28:51.241469   66755 system_pods.go:59] 9 kube-system pods found
	I1004 04:28:51.241491   66755 system_pods.go:61] "coredns-7c65d6cfc9-h5tbr" [87deb61f-2ce4-4d45-91da-c16557b5ef75] Running
	I1004 04:28:51.241496   66755 system_pods.go:61] "coredns-7c65d6cfc9-p52s6" [b9b3cd7f-f28e-4502-a55d-7792cfa5a6fe] Running
	I1004 04:28:51.241500   66755 system_pods.go:61] "etcd-embed-certs-934812" [487917e4-1b38-4b84-baf9-eacc1113718e] Running
	I1004 04:28:51.241503   66755 system_pods.go:61] "kube-apiserver-embed-certs-934812" [7fcfc483-3c53-415d-8329-5cae1ecd022f] Running
	I1004 04:28:51.241507   66755 system_pods.go:61] "kube-controller-manager-embed-certs-934812" [9ab25b16-916b-49f7-b34d-a12cf8552769] Running
	I1004 04:28:51.241514   66755 system_pods.go:61] "kube-proxy-9czbc" [dedff5a2-62b6-49c3-8369-9182d1c5bf7a] Running
	I1004 04:28:51.241517   66755 system_pods.go:61] "kube-scheduler-embed-certs-934812" [88a5acd5-108f-482a-ac24-7b75a30428ff] Running
	I1004 04:28:51.241525   66755 system_pods.go:61] "metrics-server-6867b74b74-fh2lk" [12e3e884-2ad3-4eaa-a505-822717e5bc8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:51.241528   66755 system_pods.go:61] "storage-provisioner" [67b4ef22-068c-4d14-840e-deab91c5ab94] Running
	I1004 04:28:51.241534   66755 system_pods.go:74] duration metric: took 47.237476ms to wait for pod list to return data ...
	I1004 04:28:51.241541   66755 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:28:51.438932   66755 default_sa.go:45] found service account: "default"
	I1004 04:28:51.438957   66755 default_sa.go:55] duration metric: took 197.410206ms for default service account to be created ...
	I1004 04:28:51.438966   66755 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:28:51.642064   66755 system_pods.go:86] 9 kube-system pods found
	I1004 04:28:51.642091   66755 system_pods.go:89] "coredns-7c65d6cfc9-h5tbr" [87deb61f-2ce4-4d45-91da-c16557b5ef75] Running
	I1004 04:28:51.642095   66755 system_pods.go:89] "coredns-7c65d6cfc9-p52s6" [b9b3cd7f-f28e-4502-a55d-7792cfa5a6fe] Running
	I1004 04:28:51.642100   66755 system_pods.go:89] "etcd-embed-certs-934812" [487917e4-1b38-4b84-baf9-eacc1113718e] Running
	I1004 04:28:51.642103   66755 system_pods.go:89] "kube-apiserver-embed-certs-934812" [7fcfc483-3c53-415d-8329-5cae1ecd022f] Running
	I1004 04:28:51.642107   66755 system_pods.go:89] "kube-controller-manager-embed-certs-934812" [9ab25b16-916b-49f7-b34d-a12cf8552769] Running
	I1004 04:28:51.642111   66755 system_pods.go:89] "kube-proxy-9czbc" [dedff5a2-62b6-49c3-8369-9182d1c5bf7a] Running
	I1004 04:28:51.642115   66755 system_pods.go:89] "kube-scheduler-embed-certs-934812" [88a5acd5-108f-482a-ac24-7b75a30428ff] Running
	I1004 04:28:51.642121   66755 system_pods.go:89] "metrics-server-6867b74b74-fh2lk" [12e3e884-2ad3-4eaa-a505-822717e5bc8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:51.642124   66755 system_pods.go:89] "storage-provisioner" [67b4ef22-068c-4d14-840e-deab91c5ab94] Running
	I1004 04:28:51.642133   66755 system_pods.go:126] duration metric: took 203.1616ms to wait for k8s-apps to be running ...
	I1004 04:28:51.642139   66755 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:28:51.642176   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:51.658916   66755 system_svc.go:56] duration metric: took 16.763146ms WaitForService to wait for kubelet
	I1004 04:28:51.658948   66755 kubeadm.go:582] duration metric: took 8.303579518s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:28:51.658964   66755 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:28:51.839048   66755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:28:51.839067   66755 node_conditions.go:123] node cpu capacity is 2
	I1004 04:28:51.839076   66755 node_conditions.go:105] duration metric: took 180.108785ms to run NodePressure ...
	I1004 04:28:51.839086   66755 start.go:241] waiting for startup goroutines ...
	I1004 04:28:51.839093   66755 start.go:246] waiting for cluster config update ...
	I1004 04:28:51.839103   66755 start.go:255] writing updated cluster config ...
	I1004 04:28:51.839343   66755 ssh_runner.go:195] Run: rm -f paused
	I1004 04:28:51.887283   66755 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:28:51.889326   66755 out.go:177] * Done! kubectl is now configured to use "embed-certs-934812" cluster and "default" namespace by default
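	(Editor's aside: the startup sequence above ends with minikube polling the apiserver's /healthz endpoint until it returns 200 with body "ok". A minimal, hypothetical sketch of such a poll is shown below — the URL, timeout, and TLS-skip setting are illustrative assumptions, not minikube's actual api_server.go code:)

	```go
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the given healthz URL until it returns HTTP 200
	// or the timeout expires. Illustrative sketch, not minikube's code.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a self-signed CA in this setup; skipping
			// verification here is an assumption made for the sketch only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
	}

	func main() {
		// Endpoint taken from the log above; adjust for your own cluster.
		_ = waitForHealthz("https://192.168.61.74:8443/healthz", 2*time.Minute)
	}
	```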
	I1004 04:28:48.765066   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:48.780955   67541 api_server.go:72] duration metric: took 4m18.802753607s to wait for apiserver process to appear ...
	I1004 04:28:48.780988   67541 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:28:48.781022   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:48.781074   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:48.817315   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:48.817337   67541 cri.go:89] found id: ""
	I1004 04:28:48.817346   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:48.817406   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.821619   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:48.821676   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:48.860019   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:48.860043   67541 cri.go:89] found id: ""
	I1004 04:28:48.860052   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:48.860101   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.864005   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:48.864065   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:48.901273   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:48.901295   67541 cri.go:89] found id: ""
	I1004 04:28:48.901303   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:48.901353   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.905950   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:48.906007   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:48.939708   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:48.939735   67541 cri.go:89] found id: ""
	I1004 04:28:48.939745   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:48.939812   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.943625   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:48.943692   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:48.979452   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:48.979481   67541 cri.go:89] found id: ""
	I1004 04:28:48.979490   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:48.979550   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.983629   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:48.983692   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:49.021137   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:49.021160   67541 cri.go:89] found id: ""
	I1004 04:28:49.021169   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:49.021242   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.025644   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:49.025712   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:49.062410   67541 cri.go:89] found id: ""
	I1004 04:28:49.062437   67541 logs.go:282] 0 containers: []
	W1004 04:28:49.062447   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:49.062452   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:49.062499   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:49.098959   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:49.098990   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:49.098996   67541 cri.go:89] found id: ""
	I1004 04:28:49.099005   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:49.099067   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.103474   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.107824   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:49.107852   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:49.228249   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:49.228278   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:49.269454   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:49.269479   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:49.305639   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:49.305666   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:49.770318   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:49.770348   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:49.808468   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:49.808493   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:49.884965   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:49.884997   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:49.901874   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:49.901898   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:49.952844   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:49.952869   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:49.986100   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:49.986141   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:50.023082   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:50.023108   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:50.074848   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:50.074876   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:50.112513   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:50.112541   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:52.658644   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:28:52.663076   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 200:
	ok
	I1004 04:28:52.663997   67541 api_server.go:141] control plane version: v1.31.1
	I1004 04:28:52.664017   67541 api_server.go:131] duration metric: took 3.8830221s to wait for apiserver health ...
	I1004 04:28:52.664024   67541 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:28:52.664045   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:52.664085   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:52.704174   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:52.704193   67541 cri.go:89] found id: ""
	I1004 04:28:52.704200   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:52.704253   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.708388   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:52.708438   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:52.743028   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:52.743053   67541 cri.go:89] found id: ""
	I1004 04:28:52.743062   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:52.743108   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.747354   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:52.747405   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:52.782350   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:52.782373   67541 cri.go:89] found id: ""
	I1004 04:28:52.782382   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:52.782424   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.786336   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:52.786394   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:52.826929   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:52.826950   67541 cri.go:89] found id: ""
	I1004 04:28:52.826958   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:52.827018   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.831039   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:52.831094   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:52.865963   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:52.865984   67541 cri.go:89] found id: ""
	I1004 04:28:52.865992   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:52.866032   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.869982   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:52.870024   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:52.919060   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:52.919081   67541 cri.go:89] found id: ""
	I1004 04:28:52.919091   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:52.919139   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.923080   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:52.923131   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:52.962615   67541 cri.go:89] found id: ""
	I1004 04:28:52.962636   67541 logs.go:282] 0 containers: []
	W1004 04:28:52.962643   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:52.962649   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:52.962706   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:52.999914   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:52.999936   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:52.999940   67541 cri.go:89] found id: ""
	I1004 04:28:52.999947   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:52.999998   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:53.003894   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:53.007759   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:53.007776   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:53.021269   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:53.021289   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:53.088683   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:53.088711   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:53.127363   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:53.127387   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:53.163467   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:53.163490   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:53.212683   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:53.212717   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:49.123892   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:51.124121   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:53.124323   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:49.686881   67282 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:28:49.687234   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:28:49.687487   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:28:53.569320   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:53.569360   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:53.644197   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:53.644231   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:53.747465   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:53.747497   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:53.788761   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:53.788798   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:53.822705   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:53.822737   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:53.857525   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:53.857548   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:53.894880   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:53.894904   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:56.455254   67541 system_pods.go:59] 8 kube-system pods found
	I1004 04:28:56.455286   67541 system_pods.go:61] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running
	I1004 04:28:56.455293   67541 system_pods.go:61] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running
	I1004 04:28:56.455299   67541 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running
	I1004 04:28:56.455304   67541 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running
	I1004 04:28:56.455309   67541 system_pods.go:61] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:28:56.455314   67541 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running
	I1004 04:28:56.455322   67541 system_pods.go:61] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:56.455329   67541 system_pods.go:61] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:28:56.455338   67541 system_pods.go:74] duration metric: took 3.791308758s to wait for pod list to return data ...
	I1004 04:28:56.455347   67541 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:28:56.457799   67541 default_sa.go:45] found service account: "default"
	I1004 04:28:56.457817   67541 default_sa.go:55] duration metric: took 2.463452ms for default service account to be created ...
	I1004 04:28:56.457825   67541 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:28:56.462569   67541 system_pods.go:86] 8 kube-system pods found
	I1004 04:28:56.462593   67541 system_pods.go:89] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running
	I1004 04:28:56.462601   67541 system_pods.go:89] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running
	I1004 04:28:56.462608   67541 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running
	I1004 04:28:56.462615   67541 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running
	I1004 04:28:56.462620   67541 system_pods.go:89] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:28:56.462626   67541 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running
	I1004 04:28:56.462632   67541 system_pods.go:89] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:56.462637   67541 system_pods.go:89] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:28:56.462645   67541 system_pods.go:126] duration metric: took 4.814032ms to wait for k8s-apps to be running ...
	I1004 04:28:56.462657   67541 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:28:56.462749   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:56.478944   67541 system_svc.go:56] duration metric: took 16.282384ms WaitForService to wait for kubelet
	I1004 04:28:56.478966   67541 kubeadm.go:582] duration metric: took 4m26.500769346s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:28:56.478982   67541 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:28:56.481946   67541 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:28:56.481968   67541 node_conditions.go:123] node cpu capacity is 2
	I1004 04:28:56.481980   67541 node_conditions.go:105] duration metric: took 2.992423ms to run NodePressure ...
	I1004 04:28:56.481993   67541 start.go:241] waiting for startup goroutines ...
	I1004 04:28:56.482006   67541 start.go:246] waiting for cluster config update ...
	I1004 04:28:56.482018   67541 start.go:255] writing updated cluster config ...
	I1004 04:28:56.482450   67541 ssh_runner.go:195] Run: rm -f paused
	I1004 04:28:56.528299   67541 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:28:56.530289   67541 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-281471" cluster and "default" namespace by default
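	(Editor's aside: the repeated pod_ready.go lines above are waits on a pod's Ready condition, which eventually time out for metrics-server. Below is a minimal sketch of that kind of wait using client-go — the kubeconfig path, namespace, pod name, and timeout are assumptions taken from the log, and this is not the pod_ready.go implementation:)

	```go
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path and pod name are illustrative assumptions.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-f6qhr", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Println("pod not Ready yet; retrying")
			time.Sleep(10 * time.Second)
		}
		fmt.Println("timed out waiting for pod Ready condition")
	}
	```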
	I1004 04:28:55.625569   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:58.122544   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:54.687773   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:28:54.688026   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:29:00.124374   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:02.624622   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:05.123726   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:07.622036   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:04.688599   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:29:04.688808   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:29:09.623060   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:11.623590   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:12.123919   66293 pod_ready.go:82] duration metric: took 4m0.007496621s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	E1004 04:29:12.123939   66293 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 04:29:12.123946   66293 pod_ready.go:39] duration metric: took 4m3.607239118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:29:12.123960   66293 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:29:12.123985   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:12.124023   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:12.174748   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:12.174767   66293 cri.go:89] found id: ""
	I1004 04:29:12.174775   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:12.174823   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.179374   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:12.179436   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:12.219617   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:12.219637   66293 cri.go:89] found id: ""
	I1004 04:29:12.219646   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:12.219699   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.223774   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:12.223844   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:12.261339   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:12.261360   66293 cri.go:89] found id: ""
	I1004 04:29:12.261369   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:12.261424   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.265364   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:12.265414   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:12.313178   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:12.313197   66293 cri.go:89] found id: ""
	I1004 04:29:12.313206   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:12.313271   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.317440   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:12.317498   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:12.353037   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:12.353054   66293 cri.go:89] found id: ""
	I1004 04:29:12.353072   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:12.353125   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.357212   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:12.357272   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:12.392082   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:12.392106   66293 cri.go:89] found id: ""
	I1004 04:29:12.392115   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:12.392167   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.396333   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:12.396395   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:12.439298   66293 cri.go:89] found id: ""
	I1004 04:29:12.439329   66293 logs.go:282] 0 containers: []
	W1004 04:29:12.439337   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:12.439343   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:12.439387   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:12.478798   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:12.478814   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:12.478818   66293 cri.go:89] found id: ""
	I1004 04:29:12.478824   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:12.478866   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.483035   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.486977   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:12.486992   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:12.520849   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:12.520875   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:13.072628   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:13.072671   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:13.137973   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:13.138000   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:13.259585   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:13.259611   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:13.312315   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:13.312340   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:13.352351   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:13.352377   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:13.391319   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:13.391352   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:13.430681   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:13.430712   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:13.464929   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:13.464957   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:13.505312   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:13.505340   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:13.520476   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:13.520517   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:13.582723   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:13.582752   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
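For reference, the log-gathering pass above can be reproduced by hand on the node over SSH; a minimal sketch using the same crictl and journalctl invocations the log shows (the container ID is a placeholder to be filled in from the first command):

    # List containers for one component, then pull its recent logs.
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl logs --tail 400 <CONTAINER_ID>
    # Runtime and kubelet logs come from journald.
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400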
	I1004 04:29:16.131437   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:29:16.150426   66293 api_server.go:72] duration metric: took 4m14.921074088s to wait for apiserver process to appear ...
	I1004 04:29:16.150457   66293 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:29:16.150498   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:16.150559   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:16.197236   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:16.197265   66293 cri.go:89] found id: ""
	I1004 04:29:16.197275   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:16.197341   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.202103   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:16.202187   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:16.236881   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:16.236907   66293 cri.go:89] found id: ""
	I1004 04:29:16.236916   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:16.236976   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.241220   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:16.241289   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:16.275727   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:16.275750   66293 cri.go:89] found id: ""
	I1004 04:29:16.275759   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:16.275828   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.280282   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:16.280352   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:16.320297   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:16.320323   66293 cri.go:89] found id: ""
	I1004 04:29:16.320332   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:16.320386   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.324982   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:16.325038   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:16.367062   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:16.367081   66293 cri.go:89] found id: ""
	I1004 04:29:16.367089   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:16.367143   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.371124   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:16.371182   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:16.405706   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:16.405728   66293 cri.go:89] found id: ""
	I1004 04:29:16.405738   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:16.405785   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.410027   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:16.410084   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:16.444937   66293 cri.go:89] found id: ""
	I1004 04:29:16.444961   66293 logs.go:282] 0 containers: []
	W1004 04:29:16.444971   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:16.444978   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:16.445032   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:16.480123   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:16.480153   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:16.480160   66293 cri.go:89] found id: ""
	I1004 04:29:16.480168   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:16.480228   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.484216   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.488156   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:16.488177   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:16.501573   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:16.501591   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:16.600789   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:16.600814   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:16.641604   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:16.641634   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:16.696735   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:16.696764   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:16.737153   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:16.737177   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:17.188490   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:17.188546   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:17.262072   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:17.262108   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:17.310881   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:17.310911   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:17.356105   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:17.356135   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:17.398916   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:17.398948   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:17.440122   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:17.440149   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:17.482529   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:17.482553   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:20.034163   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:29:20.039165   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 200:
	ok
	I1004 04:29:20.040105   66293 api_server.go:141] control plane version: v1.31.1
	I1004 04:29:20.040124   66293 api_server.go:131] duration metric: took 3.889660333s to wait for apiserver health ...
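The healthz probe above can also be checked by hand with curl against the endpoint shown in the log; a small sketch (the -k flag is needed because the apiserver serves a certificate signed by minikube's own CA):

    # Poll the apiserver health endpoint (address taken from the log above) until it answers "ok".
    until curl -sk https://192.168.72.54:8443/healthz | grep -qx ok; do
      sleep 2
    done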
	I1004 04:29:20.040131   66293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:29:20.040156   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:20.040203   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:20.078208   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:20.078234   66293 cri.go:89] found id: ""
	I1004 04:29:20.078244   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:20.078306   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.082751   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:20.082808   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:20.128002   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:20.128024   66293 cri.go:89] found id: ""
	I1004 04:29:20.128034   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:20.128084   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.132039   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:20.132097   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:20.171887   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:20.171911   66293 cri.go:89] found id: ""
	I1004 04:29:20.171921   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:20.171978   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.176095   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:20.176150   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:20.215155   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:20.215175   66293 cri.go:89] found id: ""
	I1004 04:29:20.215183   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:20.215241   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.219738   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:20.219814   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:20.256116   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:20.256134   66293 cri.go:89] found id: ""
	I1004 04:29:20.256142   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:20.256194   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.261201   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:20.261281   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:20.302328   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:20.302350   66293 cri.go:89] found id: ""
	I1004 04:29:20.302359   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:20.302414   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.306488   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:20.306551   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:20.341266   66293 cri.go:89] found id: ""
	I1004 04:29:20.341290   66293 logs.go:282] 0 containers: []
	W1004 04:29:20.341300   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:20.341307   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:20.341361   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:20.379560   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:20.379584   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:20.379589   66293 cri.go:89] found id: ""
	I1004 04:29:20.379598   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:20.379653   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.383816   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.388118   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:20.388137   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:20.487661   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:20.487686   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:20.539728   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:20.539754   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:20.577435   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:20.577463   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:20.616450   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:20.616480   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:20.658292   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:20.658316   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:20.733483   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:20.733515   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:20.749004   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:20.749033   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:20.799355   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:20.799383   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:20.839676   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:20.839699   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:20.874870   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:20.874896   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:20.912635   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:20.912658   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:20.968377   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:20.968405   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:23.820462   66293 system_pods.go:59] 8 kube-system pods found
	I1004 04:29:23.820491   66293 system_pods.go:61] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running
	I1004 04:29:23.820497   66293 system_pods.go:61] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running
	I1004 04:29:23.820501   66293 system_pods.go:61] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running
	I1004 04:29:23.820506   66293 system_pods.go:61] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running
	I1004 04:29:23.820514   66293 system_pods.go:61] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running
	I1004 04:29:23.820517   66293 system_pods.go:61] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running
	I1004 04:29:23.820524   66293 system_pods.go:61] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:29:23.820529   66293 system_pods.go:61] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running
	I1004 04:29:23.820537   66293 system_pods.go:74] duration metric: took 3.780400092s to wait for pod list to return data ...
	I1004 04:29:23.820544   66293 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:29:23.823119   66293 default_sa.go:45] found service account: "default"
	I1004 04:29:23.823137   66293 default_sa.go:55] duration metric: took 2.58707ms for default service account to be created ...
	I1004 04:29:23.823144   66293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:29:23.827365   66293 system_pods.go:86] 8 kube-system pods found
	I1004 04:29:23.827385   66293 system_pods.go:89] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running
	I1004 04:29:23.827389   66293 system_pods.go:89] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running
	I1004 04:29:23.827393   66293 system_pods.go:89] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running
	I1004 04:29:23.827397   66293 system_pods.go:89] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running
	I1004 04:29:23.827400   66293 system_pods.go:89] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running
	I1004 04:29:23.827405   66293 system_pods.go:89] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running
	I1004 04:29:23.827410   66293 system_pods.go:89] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:29:23.827415   66293 system_pods.go:89] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running
	I1004 04:29:23.827422   66293 system_pods.go:126] duration metric: took 4.27475ms to wait for k8s-apps to be running ...
	I1004 04:29:23.827428   66293 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:29:23.827468   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:29:23.844696   66293 system_svc.go:56] duration metric: took 17.261418ms WaitForService to wait for kubelet
	I1004 04:29:23.844724   66293 kubeadm.go:582] duration metric: took 4m22.61537826s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:29:23.844746   66293 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:29:23.847873   66293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:29:23.847892   66293 node_conditions.go:123] node cpu capacity is 2
	I1004 04:29:23.847902   66293 node_conditions.go:105] duration metric: took 3.149916ms to run NodePressure ...
	I1004 04:29:23.847915   66293 start.go:241] waiting for startup goroutines ...
	I1004 04:29:23.847923   66293 start.go:246] waiting for cluster config update ...
	I1004 04:29:23.847932   66293 start.go:255] writing updated cluster config ...
	I1004 04:29:23.848202   66293 ssh_runner.go:195] Run: rm -f paused
	I1004 04:29:23.894092   66293 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:29:23.895736   66293 out.go:177] * Done! kubectl is now configured to use "no-preload-658545" cluster and "default" namespace by default
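Once the start completes, the profile can be inspected through the kubeconfig context minikube wrote; a usage sketch, assuming the context name matches the cluster name reported above:

    kubectl --context no-preload-658545 get nodes
    kubectl --context no-preload-658545 -n kube-system get pods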
	I1004 04:29:24.690241   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:29:24.690419   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:04.692816   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:04.693091   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:04.693114   67282 kubeadm.go:310] 
	I1004 04:30:04.693149   67282 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:30:04.693214   67282 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:30:04.693236   67282 kubeadm.go:310] 
	I1004 04:30:04.693295   67282 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:30:04.693327   67282 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:30:04.693451   67282 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:30:04.693460   67282 kubeadm.go:310] 
	I1004 04:30:04.693568   67282 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:30:04.693614   67282 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:30:04.693668   67282 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:30:04.693688   67282 kubeadm.go:310] 
	I1004 04:30:04.693843   67282 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:30:04.693966   67282 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:30:04.693982   67282 kubeadm.go:310] 
	I1004 04:30:04.694097   67282 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:30:04.694218   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:30:04.694305   67282 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:30:04.694387   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:30:04.694399   67282 kubeadm.go:310] 
	I1004 04:30:04.695379   67282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:30:04.695478   67282 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:30:04.695566   67282 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
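The troubleshooting steps kubeadm prints above can be run directly on the node; a sketch of that sequence for the CRI-O runtime used in this job, with the container ID left as a placeholder:

    # Check whether the kubelet is running and why it may have exited.
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    # Probe the kubelet health endpoint kubeadm was polling.
    curl -sSL http://localhost:10248/healthz
    # List any control-plane containers CRI-O started and inspect a failing one.
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINER_ID>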
	W1004 04:30:04.695695   67282 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1004 04:30:04.695742   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:30:05.153635   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:30:05.170057   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:30:05.179541   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:30:05.179563   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:30:05.179611   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:30:05.188969   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:30:05.189025   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:30:05.198049   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:30:05.207031   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:30:05.207118   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:30:05.216934   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:30:05.226477   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:30:05.226541   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:30:05.236222   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:30:05.245314   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:30:05.245374   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
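The stale-config check above greps each kubeconfig for the expected control-plane endpoint and deletes the file when the endpoint is missing; a compact sketch of the same per-file loop:

    # Mirrors the grep / rm -f sequence in the log above.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done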
	I1004 04:30:05.255762   67282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:30:05.329816   67282 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:30:05.329953   67282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:30:05.482342   67282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:30:05.482549   67282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:30:05.482692   67282 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 04:30:05.666400   67282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:30:05.668115   67282 out.go:235]   - Generating certificates and keys ...
	I1004 04:30:05.668217   67282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:30:05.668319   67282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:30:05.668460   67282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:30:05.668562   67282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:30:05.668660   67282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:30:05.668734   67282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:30:05.668823   67282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:30:05.668905   67282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:30:05.669010   67282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:30:05.669130   67282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:30:05.669186   67282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:30:05.669269   67282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:30:05.773446   67282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:30:05.823736   67282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:30:05.951294   67282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:30:06.250340   67282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:30:06.275797   67282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:30:06.276877   67282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:30:06.276944   67282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:30:06.437286   67282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:30:06.438849   67282 out.go:235]   - Booting up control plane ...
	I1004 04:30:06.438952   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:30:06.443688   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:30:06.444596   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:30:06.445267   67282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:30:06.457334   67282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 04:30:46.456706   67282 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:30:46.456854   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:46.457117   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:51.456986   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:51.457240   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:31:01.457062   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:31:01.457288   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:31:21.456976   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:31:21.457277   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:32:01.456978   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:32:01.457225   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:32:01.457249   67282 kubeadm.go:310] 
	I1004 04:32:01.457312   67282 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:32:01.457374   67282 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:32:01.457383   67282 kubeadm.go:310] 
	I1004 04:32:01.457434   67282 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:32:01.457512   67282 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:32:01.457678   67282 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:32:01.457692   67282 kubeadm.go:310] 
	I1004 04:32:01.457838   67282 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:32:01.457892   67282 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:32:01.457946   67282 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:32:01.457957   67282 kubeadm.go:310] 
	I1004 04:32:01.458102   67282 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:32:01.458217   67282 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:32:01.458233   67282 kubeadm.go:310] 
	I1004 04:32:01.458379   67282 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:32:01.458494   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:32:01.458604   67282 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:32:01.458699   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:32:01.458710   67282 kubeadm.go:310] 
	I1004 04:32:01.459157   67282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:32:01.459272   67282 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:32:01.459386   67282 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1004 04:32:01.459464   67282 kubeadm.go:394] duration metric: took 7m57.553695137s to StartCluster
	I1004 04:32:01.459522   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:32:01.459586   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:32:01.500997   67282 cri.go:89] found id: ""
	I1004 04:32:01.501026   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.501037   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:32:01.501044   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:32:01.501102   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:32:01.537240   67282 cri.go:89] found id: ""
	I1004 04:32:01.537276   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.537288   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:32:01.537295   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:32:01.537349   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:32:01.573959   67282 cri.go:89] found id: ""
	I1004 04:32:01.573995   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.574007   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:32:01.574013   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:32:01.574074   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:32:01.610614   67282 cri.go:89] found id: ""
	I1004 04:32:01.610645   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.610657   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:32:01.610665   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:32:01.610716   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:32:01.645520   67282 cri.go:89] found id: ""
	I1004 04:32:01.645554   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.645567   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:32:01.645574   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:32:01.645640   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:32:01.679787   67282 cri.go:89] found id: ""
	I1004 04:32:01.679814   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.679823   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:32:01.679828   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:32:01.679873   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:32:01.714860   67282 cri.go:89] found id: ""
	I1004 04:32:01.714883   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.714891   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:32:01.714897   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:32:01.714952   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:32:01.761170   67282 cri.go:89] found id: ""
	I1004 04:32:01.761198   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.761208   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:32:01.761220   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:32:01.761232   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:32:01.822966   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:32:01.823006   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:32:01.839482   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:32:01.839510   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:32:01.917863   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:32:01.917887   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:32:01.917901   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:32:02.027216   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:32:02.027247   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1004 04:32:02.069804   67282 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1004 04:32:02.069852   67282 out.go:270] * 
	W1004 04:32:02.069922   67282 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:32:02.069939   67282 out.go:270] * 
	W1004 04:32:02.070740   67282 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 04:32:02.074308   67282 out.go:201] 
	W1004 04:32:02.075387   67282 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:32:02.075427   67282 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1004 04:32:02.075458   67282 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1004 04:32:02.076675   67282 out.go:201] 
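	The suggestion above points at the kubelet failing to come up, likely from a cgroup-driver mismatch. A minimal troubleshooting sequence based only on the commands quoted in this log might look like the following; the profile name "<profile>" and the systemd cgroup driver are assumptions for illustration, not something this report confirms:
	
		# Inspect the kubelet unit and its recent journal entries (commands quoted above).
		systemctl status kubelet
		journalctl -xeu kubelet
	
		# List Kubernetes containers known to CRI-O, then fetch logs for a failing one.
		# CONTAINERID is a placeholder taken from the ps output.
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	
		# Retry the start with the kubelet cgroup driver pinned to systemd, as the
		# minikube output above suggests. "<profile>" is a placeholder for the cluster profile.
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	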
	
	
	==> CRI-O <==
	Oct 04 04:38:25 no-preload-658545 crio[708]: time="2024-10-04 04:38:25.919851648Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c1bbc35-122e-4c9c-a733-2e6349ba1849 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:38:25 no-preload-658545 crio[708]: time="2024-10-04 04:38:25.920726058Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba71ce1d-5935-44ef-9f78-aa10681877f2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:38:25 no-preload-658545 crio[708]: time="2024-10-04 04:38:25.921028142Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016705921010416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba71ce1d-5935-44ef-9f78-aa10681877f2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:38:25 no-preload-658545 crio[708]: time="2024-10-04 04:38:25.922044842Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b562f1c6-b23f-47af-89fe-76093c1a537f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:38:25 no-preload-658545 crio[708]: time="2024-10-04 04:38:25.922092462Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b562f1c6-b23f-47af-89fe-76093c1a537f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:38:25 no-preload-658545 crio[708]: time="2024-10-04 04:38:25.922335668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681,PodSandboxId:90e3478943ec0363b983d0920560be74fe9cd2768f241e0af091d6bebd927cb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728015930257912715,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bf1888-f061-44ad-9c2b-0f2db0ade47f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc71dbacc9f4fe3d3d7366fc1b0b6b6b4f2fa3fa5d4a4b4e577ea6cf1fcb947,PodSandboxId:6c5decc647df41907c1da01451e76185d89e481f142634a98a4f446c0ff3eb4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728015910640621213,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61784d4d-400f-48bd-9ff5-aa2cdcc3a074,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e,PodSandboxId:f82e381a381ff249350431632919c7c16a3432c89cbac9328088c655294a40f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015906950070910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a5d64c0-542f-4972-b038-e675495a22b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab,PodSandboxId:f1ce6e93011b35981cc3f5b623e91e84f5a3e535d3162400ebc2beb06cfd609e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015899461910306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dvr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b5c79-3995-4de5-ae
b2-da465aeb66dd,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28,PodSandboxId:90e3478943ec0363b983d0920560be74fe9cd2768f241e0af091d6bebd927cb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728015899390469101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bf1888-f061-44ad-9c2b-0f2db0ade4
7f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e,PodSandboxId:a82ca4aabbd99765a4c0d4f7ca3907c7106ce9d1336763e2f8fd6ae0c2234a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015894709776309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040cfee45caa04849ca5d3640f501d0b,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09,PodSandboxId:99ac6a716156d2a7970d0e30ae718859564ca5da3fd507b5cfe4a03a0f4e29fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015894689534500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af828b86d14cca95a4d137db49291e92,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6,PodSandboxId:c7b9243060eb547f3917374710b770dceebd61c310dabe87a5baed13c11793b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015894665966003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c43528b6eadbf4f9b537af1521300fc,},Annotat
ions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38,PodSandboxId:b6b07f874979b29898186370dae210baba8b89361f5a053125884fa3273482d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015894648623461,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84d8f4e17e13e92c39daa0117fee16,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b562f1c6-b23f-47af-89fe-76093c1a537f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:38:25 no-preload-658545 crio[708]: time="2024-10-04 04:38:25.960722830Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=284d8fc4-1925-475f-a844-de779f0c2c9e name=/runtime.v1.RuntimeService/Version
	Oct 04 04:38:25 no-preload-658545 crio[708]: time="2024-10-04 04:38:25.960792467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=284d8fc4-1925-475f-a844-de779f0c2c9e name=/runtime.v1.RuntimeService/Version
	Oct 04 04:38:25 no-preload-658545 crio[708]: time="2024-10-04 04:38:25.961904574Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75057e01-6e6d-4ef2-a97c-107d6f25eac3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:38:25 no-preload-658545 crio[708]: time="2024-10-04 04:38:25.962452827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016705962223089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75057e01-6e6d-4ef2-a97c-107d6f25eac3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:38:25 no-preload-658545 crio[708]: time="2024-10-04 04:38:25.963052872Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25dc52b5-3a0d-4378-866d-442f356fdf8e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:38:25 no-preload-658545 crio[708]: time="2024-10-04 04:38:25.963120681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25dc52b5-3a0d-4378-866d-442f356fdf8e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:38:25 no-preload-658545 crio[708]: time="2024-10-04 04:38:25.963361615Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681,PodSandboxId:90e3478943ec0363b983d0920560be74fe9cd2768f241e0af091d6bebd927cb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728015930257912715,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bf1888-f061-44ad-9c2b-0f2db0ade47f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc71dbacc9f4fe3d3d7366fc1b0b6b6b4f2fa3fa5d4a4b4e577ea6cf1fcb947,PodSandboxId:6c5decc647df41907c1da01451e76185d89e481f142634a98a4f446c0ff3eb4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728015910640621213,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61784d4d-400f-48bd-9ff5-aa2cdcc3a074,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e,PodSandboxId:f82e381a381ff249350431632919c7c16a3432c89cbac9328088c655294a40f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015906950070910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a5d64c0-542f-4972-b038-e675495a22b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab,PodSandboxId:f1ce6e93011b35981cc3f5b623e91e84f5a3e535d3162400ebc2beb06cfd609e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015899461910306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dvr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b5c79-3995-4de5-ae
b2-da465aeb66dd,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28,PodSandboxId:90e3478943ec0363b983d0920560be74fe9cd2768f241e0af091d6bebd927cb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728015899390469101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bf1888-f061-44ad-9c2b-0f2db0ade4
7f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e,PodSandboxId:a82ca4aabbd99765a4c0d4f7ca3907c7106ce9d1336763e2f8fd6ae0c2234a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015894709776309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040cfee45caa04849ca5d3640f501d0b,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09,PodSandboxId:99ac6a716156d2a7970d0e30ae718859564ca5da3fd507b5cfe4a03a0f4e29fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015894689534500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af828b86d14cca95a4d137db49291e92,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6,PodSandboxId:c7b9243060eb547f3917374710b770dceebd61c310dabe87a5baed13c11793b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015894665966003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c43528b6eadbf4f9b537af1521300fc,},Annotat
ions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38,PodSandboxId:b6b07f874979b29898186370dae210baba8b89361f5a053125884fa3273482d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015894648623461,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84d8f4e17e13e92c39daa0117fee16,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25dc52b5-3a0d-4378-866d-442f356fdf8e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:38:25 no-preload-658545 crio[708]: time="2024-10-04 04:38:25.999346983Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0a762de8-2991-44fc-b774-7e81a22afedb name=/runtime.v1.RuntimeService/Version
	Oct 04 04:38:25 no-preload-658545 crio[708]: time="2024-10-04 04:38:25.999428999Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0a762de8-2991-44fc-b774-7e81a22afedb name=/runtime.v1.RuntimeService/Version
	Oct 04 04:38:26 no-preload-658545 crio[708]: time="2024-10-04 04:38:26.000515645Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b435b05-e8e8-43a9-8586-1468bcac20b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:38:26 no-preload-658545 crio[708]: time="2024-10-04 04:38:26.000856450Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016706000834909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b435b05-e8e8-43a9-8586-1468bcac20b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:38:26 no-preload-658545 crio[708]: time="2024-10-04 04:38:26.001454778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37509687-ce4a-4aa9-8d37-8e04abf6873c name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:38:26 no-preload-658545 crio[708]: time="2024-10-04 04:38:26.001519609Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37509687-ce4a-4aa9-8d37-8e04abf6873c name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:38:26 no-preload-658545 crio[708]: time="2024-10-04 04:38:26.001740898Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681,PodSandboxId:90e3478943ec0363b983d0920560be74fe9cd2768f241e0af091d6bebd927cb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728015930257912715,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bf1888-f061-44ad-9c2b-0f2db0ade47f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc71dbacc9f4fe3d3d7366fc1b0b6b6b4f2fa3fa5d4a4b4e577ea6cf1fcb947,PodSandboxId:6c5decc647df41907c1da01451e76185d89e481f142634a98a4f446c0ff3eb4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728015910640621213,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61784d4d-400f-48bd-9ff5-aa2cdcc3a074,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e,PodSandboxId:f82e381a381ff249350431632919c7c16a3432c89cbac9328088c655294a40f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015906950070910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a5d64c0-542f-4972-b038-e675495a22b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab,PodSandboxId:f1ce6e93011b35981cc3f5b623e91e84f5a3e535d3162400ebc2beb06cfd609e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015899461910306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dvr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b5c79-3995-4de5-ae
b2-da465aeb66dd,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28,PodSandboxId:90e3478943ec0363b983d0920560be74fe9cd2768f241e0af091d6bebd927cb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728015899390469101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bf1888-f061-44ad-9c2b-0f2db0ade4
7f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e,PodSandboxId:a82ca4aabbd99765a4c0d4f7ca3907c7106ce9d1336763e2f8fd6ae0c2234a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015894709776309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040cfee45caa04849ca5d3640f501d0b,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09,PodSandboxId:99ac6a716156d2a7970d0e30ae718859564ca5da3fd507b5cfe4a03a0f4e29fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015894689534500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af828b86d14cca95a4d137db49291e92,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6,PodSandboxId:c7b9243060eb547f3917374710b770dceebd61c310dabe87a5baed13c11793b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015894665966003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c43528b6eadbf4f9b537af1521300fc,},Annotat
ions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38,PodSandboxId:b6b07f874979b29898186370dae210baba8b89361f5a053125884fa3273482d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015894648623461,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84d8f4e17e13e92c39daa0117fee16,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37509687-ce4a-4aa9-8d37-8e04abf6873c name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:38:26 no-preload-658545 crio[708]: time="2024-10-04 04:38:26.009564662Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34dc1412-1118-4218-adc4-93d11304deb4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 04 04:38:26 no-preload-658545 crio[708]: time="2024-10-04 04:38:26.009799776Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6c5decc647df41907c1da01451e76185d89e481f142634a98a4f446c0ff3eb4f,Metadata:&PodSandboxMetadata{Name:busybox,Uid:61784d4d-400f-48bd-9ff5-aa2cdcc3a074,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728015906813036548,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61784d4d-400f-48bd-9ff5-aa2cdcc3a074,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T04:24:58.916544993Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f82e381a381ff249350431632919c7c16a3432c89cbac9328088c655294a40f3,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-ppggj,Uid:6a5d64c0-542f-4972-b038-e675495a22b7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17280159067187396
76,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a5d64c0-542f-4972-b038-e675495a22b7,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T04:24:58.916546484Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8994405e47a92e0e256938d3ec7db23bf58ff83b3932365f35d141bf4fb6e3b5,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-zsf86,Uid:434282d8-7a99-4a76-b5c3-a880cf78ec35,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728015905020335183,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-zsf86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 434282d8-7a99-4a76-b5c3-a880cf78ec35,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T04:24:58.9
16542735Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f1ce6e93011b35981cc3f5b623e91e84f5a3e535d3162400ebc2beb06cfd609e,Metadata:&PodSandboxMetadata{Name:kube-proxy-dvr6b,Uid:365b5c79-3995-4de5-aeb2-da465aeb66dd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728015899238565548,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-dvr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b5c79-3995-4de5-aeb2-da465aeb66dd,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-04T04:24:58.916541356Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:90e3478943ec0363b983d0920560be74fe9cd2768f241e0af091d6bebd927cb6,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:28bf1888-f061-44ad-9c2b-0f2db0ade47f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728015899235798461,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bf1888-f061-44ad-9c2b-0f2db0ade47f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2024-10-04T04:24:58.916543886Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a82ca4aabbd99765a4c0d4f7ca3907c7106ce9d1336763e2f8fd6ae0c2234a8f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-658545,Uid:040cfee45caa04849ca5d3640f501d0b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728015894451078000,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040cfee45caa04849ca5d3640f501d0b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 040cfee45caa04849ca5d3640f501d0b,kubernetes.io/config.seen: 2024-10-04T04:24:53.908415068Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b6b07f874979b29898186370dae210baba8b89361f5a053125884fa3273482d7,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-658545,Uid:9f84d8f4e17e13e92c39daa0117fee16,Namespace:kube-system
,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728015894449064785,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84d8f4e17e13e92c39daa0117fee16,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.54:2379,kubernetes.io/config.hash: 9f84d8f4e17e13e92c39daa0117fee16,kubernetes.io/config.seen: 2024-10-04T04:24:53.993101131Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:99ac6a716156d2a7970d0e30ae718859564ca5da3fd507b5cfe4a03a0f4e29fd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-658545,Uid:af828b86d14cca95a4d137db49291e92,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728015894445653841,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-658545,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: af828b86d14cca95a4d137db49291e92,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.54:8443,kubernetes.io/config.hash: af828b86d14cca95a4d137db49291e92,kubernetes.io/config.seen: 2024-10-04T04:24:53.908416242Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c7b9243060eb547f3917374710b770dceebd61c310dabe87a5baed13c11793b3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-658545,Uid:8c43528b6eadbf4f9b537af1521300fc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728015894438650723,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c43528b6eadbf4f9b537af1521300fc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8c43528b6eadbf4f9b537af1521300fc,kube
rnetes.io/config.seen: 2024-10-04T04:24:53.908410966Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=34dc1412-1118-4218-adc4-93d11304deb4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 04 04:38:26 no-preload-658545 crio[708]: time="2024-10-04 04:38:26.010597158Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdf0e8f0-a2b5-4f6d-b46b-41ba6ce0a1ad name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:38:26 no-preload-658545 crio[708]: time="2024-10-04 04:38:26.010645019Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdf0e8f0-a2b5-4f6d-b46b-41ba6ce0a1ad name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:38:26 no-preload-658545 crio[708]: time="2024-10-04 04:38:26.010798931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681,PodSandboxId:90e3478943ec0363b983d0920560be74fe9cd2768f241e0af091d6bebd927cb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728015930257912715,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bf1888-f061-44ad-9c2b-0f2db0ade47f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc71dbacc9f4fe3d3d7366fc1b0b6b6b4f2fa3fa5d4a4b4e577ea6cf1fcb947,PodSandboxId:6c5decc647df41907c1da01451e76185d89e481f142634a98a4f446c0ff3eb4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728015910640621213,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61784d4d-400f-48bd-9ff5-aa2cdcc3a074,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e,PodSandboxId:f82e381a381ff249350431632919c7c16a3432c89cbac9328088c655294a40f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015906950070910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a5d64c0-542f-4972-b038-e675495a22b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab,PodSandboxId:f1ce6e93011b35981cc3f5b623e91e84f5a3e535d3162400ebc2beb06cfd609e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015899461910306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dvr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b5c79-3995-4de5-ae
b2-da465aeb66dd,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e,PodSandboxId:a82ca4aabbd99765a4c0d4f7ca3907c7106ce9d1336763e2f8fd6ae0c2234a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015894709776309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040cfee45caa04849ca5d3640f501d
0b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09,PodSandboxId:99ac6a716156d2a7970d0e30ae718859564ca5da3fd507b5cfe4a03a0f4e29fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015894689534500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af828b86d14cca95a4d137db49291e92,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6,PodSandboxId:c7b9243060eb547f3917374710b770dceebd61c310dabe87a5baed13c11793b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015894665966003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c43528b6eadbf4f9b537af152130
0fc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38,PodSandboxId:b6b07f874979b29898186370dae210baba8b89361f5a053125884fa3273482d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015894648623461,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84d8f4e17e13e92c39daa0117fee16,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fdf0e8f0-a2b5-4f6d-b46b-41ba6ce0a1ad name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5451845c1793f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   90e3478943ec0       storage-provisioner
	1fc71dbacc9f4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   6c5decc647df4       busybox
	8f0f82fef0d93       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   f82e381a381ff       coredns-7c65d6cfc9-ppggj
	d3a50dddda4ab       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   f1ce6e93011b3       kube-proxy-dvr6b
	e1cf4915ff1e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   90e3478943ec0       storage-provisioner
	bd0fa97b8409f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   a82ca4aabbd99       kube-scheduler-no-preload-658545
	1d381a201b984       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   99ac6a716156d       kube-apiserver-no-preload-658545
	1f1e00105cb78       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   c7b9243060eb5       kube-controller-manager-no-preload-658545
	def980019915c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   b6b07f874979b       etcd-no-preload-658545
	
	
	==> coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34774 - 9872 "HINFO IN 4357990399947125494.2098345499879057467. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019654964s
	
	
	==> describe nodes <==
	Name:               no-preload-658545
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-658545
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=no-preload-658545
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T04_15_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 04:15:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-658545
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 04:38:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 04:35:40 +0000   Fri, 04 Oct 2024 04:15:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 04:35:40 +0000   Fri, 04 Oct 2024 04:15:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 04:35:40 +0000   Fri, 04 Oct 2024 04:15:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 04:35:40 +0000   Fri, 04 Oct 2024 04:25:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.54
	  Hostname:    no-preload-658545
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d759497abb79413c9c5a7b20b9f885c4
	  System UUID:                d759497a-bb79-413c-9c5a-7b20b9f885c4
	  Boot ID:                    a5102572-ba28-43f1-a510-6ba9cb4798b3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-ppggj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-658545                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-658545             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-658545    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-dvr6b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-658545             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-zsf86              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m (x2 over 22m)  kubelet          Node no-preload-658545 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x2 over 22m)  kubelet          Node no-preload-658545 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x2 over 22m)  kubelet          Node no-preload-658545 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22m                kubelet          Node no-preload-658545 status is now: NodeReady
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-658545 event: Registered Node no-preload-658545 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-658545 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-658545 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-658545 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-658545 event: Registered Node no-preload-658545 in Controller
	
	
	==> dmesg <==
	[Oct 4 04:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.059273] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.051436] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.085772] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.686255] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.643710] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.625518] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.063207] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066634] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.196955] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.142936] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.309370] systemd-fstab-generator[699]: Ignoring "noauto" option for root device
	[ +16.031370] systemd-fstab-generator[1234]: Ignoring "noauto" option for root device
	[  +0.062665] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.162172] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[  +3.721915] kauditd_printk_skb: 97 callbacks suppressed
	[Oct 4 04:25] systemd-fstab-generator[1986]: Ignoring "noauto" option for root device
	[  +3.701123] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.637724] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] <==
	{"level":"info","ts":"2024-10-04T04:24:55.044154Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-04T04:24:55.044679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 switched to configuration voters=(6864009345536280071)"}
	{"level":"info","ts":"2024-10-04T04:24:55.044757Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"770d524238a76c54","local-member-id":"5f41dc21f7a6c607","added-peer-id":"5f41dc21f7a6c607","added-peer-peer-urls":["https://192.168.72.54:2380"]}
	{"level":"info","ts":"2024-10-04T04:24:55.040986Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-04T04:24:55.044914Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"770d524238a76c54","local-member-id":"5f41dc21f7a6c607","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T04:24:55.044956Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T04:24:56.577341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-04T04:24:56.577432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-04T04:24:56.577463Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 received MsgPreVoteResp from 5f41dc21f7a6c607 at term 2"}
	{"level":"info","ts":"2024-10-04T04:24:56.577508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 became candidate at term 3"}
	{"level":"info","ts":"2024-10-04T04:24:56.577517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 received MsgVoteResp from 5f41dc21f7a6c607 at term 3"}
	{"level":"info","ts":"2024-10-04T04:24:56.577526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 became leader at term 3"}
	{"level":"info","ts":"2024-10-04T04:24:56.577533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5f41dc21f7a6c607 elected leader 5f41dc21f7a6c607 at term 3"}
	{"level":"info","ts":"2024-10-04T04:24:56.580820Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T04:24:56.581789Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:24:56.582663Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.54:2379"}
	{"level":"info","ts":"2024-10-04T04:24:56.582951Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T04:24:56.583599Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:24:56.580774Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"5f41dc21f7a6c607","local-member-attributes":"{Name:no-preload-658545 ClientURLs:[https://192.168.72.54:2379]}","request-path":"/0/members/5f41dc21f7a6c607/attributes","cluster-id":"770d524238a76c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T04:24:56.584437Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T04:24:56.584465Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T04:24:56.585202Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T04:34:56.642467Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":823}
	{"level":"info","ts":"2024-10-04T04:34:56.651982Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":823,"took":"8.971572ms","hash":2623449157,"current-db-size-bytes":2551808,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2551808,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-10-04T04:34:56.652073Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2623449157,"revision":823,"compact-revision":-1}
	
	
	==> kernel <==
	 04:38:26 up 14 min,  0 users,  load average: 0.18, 0.16, 0.09
	Linux no-preload-658545 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] <==
	E1004 04:34:59.030114       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1004 04:34:59.030131       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1004 04:34:59.031277       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:34:59.031306       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 04:35:59.032332       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:35:59.032388       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1004 04:35:59.032334       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:35:59.032458       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1004 04:35:59.033826       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:35:59.033926       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 04:37:59.034390       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:37:59.034525       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1004 04:37:59.034391       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:37:59.034616       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1004 04:37:59.035770       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:37:59.035815       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] <==
	E1004 04:33:01.631745       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:33:02.147407       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:33:31.638092       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:33:32.155565       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:34:01.643571       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:34:02.164853       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:34:31.650865       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:34:32.173417       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:35:01.656981       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:35:02.184967       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:35:31.663453       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:35:32.192977       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 04:35:40.409772       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-658545"
	E1004 04:36:01.669093       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:36:02.200520       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 04:36:04.026892       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="244.511µs"
	I1004 04:36:17.023708       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="72.538µs"
	E1004 04:36:31.674878       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:36:32.207795       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:37:01.683095       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:37:02.214810       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:37:31.689081       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:37:32.223403       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:38:01.695821       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:38:02.232715       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 04:24:59.828842       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 04:24:59.856881       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.54"]
	E1004 04:24:59.857085       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 04:24:59.982333       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 04:24:59.982566       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 04:24:59.982670       1 server_linux.go:169] "Using iptables Proxier"
	I1004 04:25:00.000354       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 04:25:00.009747       1 server.go:483] "Version info" version="v1.31.1"
	I1004 04:25:00.009853       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 04:25:00.029155       1 config.go:328] "Starting node config controller"
	I1004 04:25:00.029583       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 04:25:00.031843       1 config.go:199] "Starting service config controller"
	I1004 04:25:00.051045       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 04:25:00.041867       1 config.go:105] "Starting endpoint slice config controller"
	I1004 04:25:00.053461       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 04:25:00.053708       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 04:25:00.132776       1 shared_informer.go:320] Caches are synced for node config
	I1004 04:25:00.152217       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] <==
	I1004 04:24:55.712204       1 serving.go:386] Generated self-signed cert in-memory
	W1004 04:24:57.979783       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 04:24:57.979999       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 04:24:57.980120       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 04:24:57.980157       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 04:24:58.046310       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1004 04:24:58.046583       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 04:24:58.055885       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1004 04:24:58.061540       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1004 04:24:58.061690       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 04:24:58.062313       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 04:24:58.163226       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 04:37:14 no-preload-658545 kubelet[1364]: E1004 04:37:14.152435    1364 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016634151919890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:24 no-preload-658545 kubelet[1364]: E1004 04:37:24.154052    1364 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016644153699009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:24 no-preload-658545 kubelet[1364]: E1004 04:37:24.154139    1364 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016644153699009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:27 no-preload-658545 kubelet[1364]: E1004 04:37:27.010352    1364 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zsf86" podUID="434282d8-7a99-4a76-b5c3-a880cf78ec35"
	Oct 04 04:37:34 no-preload-658545 kubelet[1364]: E1004 04:37:34.156020    1364 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016654155702503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:34 no-preload-658545 kubelet[1364]: E1004 04:37:34.156072    1364 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016654155702503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:39 no-preload-658545 kubelet[1364]: E1004 04:37:39.009890    1364 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zsf86" podUID="434282d8-7a99-4a76-b5c3-a880cf78ec35"
	Oct 04 04:37:44 no-preload-658545 kubelet[1364]: E1004 04:37:44.163715    1364 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016664159746275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:44 no-preload-658545 kubelet[1364]: E1004 04:37:44.164002    1364 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016664159746275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:52 no-preload-658545 kubelet[1364]: E1004 04:37:52.012043    1364 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zsf86" podUID="434282d8-7a99-4a76-b5c3-a880cf78ec35"
	Oct 04 04:37:54 no-preload-658545 kubelet[1364]: E1004 04:37:54.031068    1364 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 04:37:54 no-preload-658545 kubelet[1364]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 04:37:54 no-preload-658545 kubelet[1364]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 04:37:54 no-preload-658545 kubelet[1364]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 04:37:54 no-preload-658545 kubelet[1364]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 04:37:54 no-preload-658545 kubelet[1364]: E1004 04:37:54.165980    1364 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016674165643392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:37:54 no-preload-658545 kubelet[1364]: E1004 04:37:54.166003    1364 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016674165643392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:38:04 no-preload-658545 kubelet[1364]: E1004 04:38:04.167790    1364 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016684167328978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:38:04 no-preload-658545 kubelet[1364]: E1004 04:38:04.167864    1364 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016684167328978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:38:07 no-preload-658545 kubelet[1364]: E1004 04:38:07.009137    1364 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zsf86" podUID="434282d8-7a99-4a76-b5c3-a880cf78ec35"
	Oct 04 04:38:14 no-preload-658545 kubelet[1364]: E1004 04:38:14.169764    1364 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016694169405909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:38:14 no-preload-658545 kubelet[1364]: E1004 04:38:14.170195    1364 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016694169405909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:38:22 no-preload-658545 kubelet[1364]: E1004 04:38:22.009552    1364 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zsf86" podUID="434282d8-7a99-4a76-b5c3-a880cf78ec35"
	Oct 04 04:38:24 no-preload-658545 kubelet[1364]: E1004 04:38:24.172714    1364 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016704172373634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:38:24 no-preload-658545 kubelet[1364]: E1004 04:38:24.172745    1364 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016704172373634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] <==
	I1004 04:25:30.358062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 04:25:30.373088       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 04:25:30.373189       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 04:25:30.382209       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 04:25:30.382665       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5b66a53-6e63-4dde-adfd-df3bb1be9ea0", APIVersion:"v1", ResourceVersion:"587", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-658545_5e71066c-469a-43ee-917a-9f4f186fd191 became leader
	I1004 04:25:30.382716       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-658545_5e71066c-469a-43ee-917a-9f4f186fd191!
	I1004 04:25:30.482901       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-658545_5e71066c-469a-43ee-917a-9f4f186fd191!
	
	
	==> storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] <==
	I1004 04:24:59.498625       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1004 04:25:29.503995       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-658545 -n no-preload-658545
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-658545 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-zsf86
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-658545 describe pod metrics-server-6867b74b74-zsf86
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-658545 describe pod metrics-server-6867b74b74-zsf86: exit status 1 (60.430727ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-zsf86" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-658545 describe pod metrics-server-6867b74b74-zsf86: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
E1004 04:32:08.993934   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
E1004 04:32:15.014060   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
(previous warning repeated 172 more times while 192.168.50.146:8443 kept refusing connections)
E1004 04:35:12.074625   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
[the warning above repeated 116 more times while the connection to 192.168.50.146:8443 stayed refused]
E1004 04:37:08.994048   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
[the warning above repeated 5 more times while the connection to 192.168.50.146:8443 stayed refused]
E1004 04:37:15.014622   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
[the warning above repeated 41 more times while the connection to 192.168.50.146:8443 stayed refused]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
[... the identical "connection refused" pod-list warning repeats for every subsequent poll attempt of the 9m0s wait ...]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
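Each warning above is one poll of the apiserver at 192.168.50.146:8443, which is refusing TCP connections; the final attempt fails in the client rate limiter only because the 9m0s context has already expired. A minimal sketch for confirming that state directly, assuming only the Go standard library (the address is taken from the warnings above, not from the test code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Try the same TCP endpoint the pod-list requests are failing on.
		conn, err := net.DialTimeout("tcp", "192.168.50.146:8443", 3*time.Second)
		if err != nil {
			// While the apiserver is down this prints a "connection refused" error.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}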
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-420062 -n old-k8s-version-420062
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-420062 -n old-k8s-version-420062: exit status 2 (223.914378ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-420062" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
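The wait that timed out is a label-selector poll with a hard deadline. A rough sketch of that pattern, assuming client-go; the function name and polling interval are illustrative, not the actual helpers_test.go implementation:

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsWithLabel polls a namespace for pods matching a label selector
	// until one is Running or the context deadline expires.
	func waitForPodsWithLabel(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
		ticker := time.NewTicker(2 * time.Second)
		defer ticker.Stop()
		for {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return nil
					}
				}
			}
			// err != nil covers the "connection refused" case seen above; keep retrying.
			select {
			case <-ctx.Done():
				return fmt.Errorf("pods %q in %q not running: %w", selector, ns, ctx.Err())
			case <-ticker.C:
			}
		}
	}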
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420062 -n old-k8s-version-420062
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420062 -n old-k8s-version-420062: exit status 2 (219.168572ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-420062 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-420062 logs -n 25: (1.60240253s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-658545                                   | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-934812            | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-934812                                  | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-617497             | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-617497                  | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617497 --memory=2200 --alsologtostderr   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:17 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-617497 image list                           | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	| delete  | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:18 UTC |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-658545                  | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-658545                                   | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC | 04 Oct 24 04:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-281471  | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC | 04 Oct 24 04:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-420062        | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-934812                 | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-934812                                  | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:19 UTC | 04 Oct 24 04:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC | 04 Oct 24 04:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-420062             | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC | 04 Oct 24 04:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-281471       | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:21 UTC | 04 Oct 24 04:28 UTC |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 04:21:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 04:21:23.276574   67541 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:21:23.276701   67541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:21:23.276710   67541 out.go:358] Setting ErrFile to fd 2...
	I1004 04:21:23.276715   67541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:21:23.276893   67541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:21:23.277439   67541 out.go:352] Setting JSON to false
	I1004 04:21:23.278387   67541 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7428,"bootTime":1728008255,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 04:21:23.278482   67541 start.go:139] virtualization: kvm guest
	I1004 04:21:23.280571   67541 out.go:177] * [default-k8s-diff-port-281471] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 04:21:23.282033   67541 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 04:21:23.282063   67541 notify.go:220] Checking for updates...
	I1004 04:21:23.284454   67541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 04:21:23.285843   67541 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:21:23.287026   67541 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:21:23.288328   67541 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 04:21:23.289544   67541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 04:21:23.291321   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:21:23.291979   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:21:23.292059   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:21:23.306995   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I1004 04:21:23.307440   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:21:23.308080   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:21:23.308106   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:21:23.308442   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:21:23.308642   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:21:23.308893   67541 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 04:21:23.309208   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:21:23.309280   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:21:23.323807   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I1004 04:21:23.324281   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:21:23.324777   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:21:23.324797   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:21:23.325085   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:21:23.325248   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:21:23.359916   67541 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 04:21:23.361482   67541 start.go:297] selected driver: kvm2
	I1004 04:21:23.361504   67541 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:21:23.361657   67541 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 04:21:23.362533   67541 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:21:23.362621   67541 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 04:21:23.378088   67541 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 04:21:23.378515   67541 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:21:23.378547   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:21:23.378591   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:21:23.378627   67541 start.go:340] cluster config:
	{Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:21:23.378727   67541 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:21:23.380705   67541 out.go:177] * Starting "default-k8s-diff-port-281471" primary control-plane node in "default-k8s-diff-port-281471" cluster
	I1004 04:21:20.068102   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:23.140106   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:23.381986   67541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:21:23.382036   67541 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 04:21:23.382048   67541 cache.go:56] Caching tarball of preloaded images
	I1004 04:21:23.382125   67541 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 04:21:23.382135   67541 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 04:21:23.382254   67541 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/config.json ...
	I1004 04:21:23.382433   67541 start.go:360] acquireMachinesLock for default-k8s-diff-port-281471: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:21:29.220163   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:32.292105   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:38.372080   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:41.444091   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:47.524103   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:50.596091   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:56.676086   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:59.748055   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:05.828125   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:08.900042   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:14.980094   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:18.052114   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:24.132087   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:27.204139   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:33.284040   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:36.356076   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:42.436190   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:45.508075   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:51.588061   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:54.660042   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:00.740141   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:03.812099   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:09.892076   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:12.964133   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:15.968919   66755 start.go:364] duration metric: took 4m6.72532498s to acquireMachinesLock for "embed-certs-934812"
	I1004 04:23:15.968984   66755 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:15.968992   66755 fix.go:54] fixHost starting: 
	I1004 04:23:15.969309   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:15.969356   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:15.984739   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34447
	I1004 04:23:15.985214   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:15.985743   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:23:15.985769   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:15.986104   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:15.986289   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:15.986449   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:23:15.988237   66755 fix.go:112] recreateIfNeeded on embed-certs-934812: state=Stopped err=<nil>
	I1004 04:23:15.988263   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	W1004 04:23:15.988415   66755 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:15.990473   66755 out.go:177] * Restarting existing kvm2 VM for "embed-certs-934812" ...
	I1004 04:23:15.965929   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:15.965974   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:23:15.966321   66293 buildroot.go:166] provisioning hostname "no-preload-658545"
	I1004 04:23:15.966348   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:23:15.966530   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:23:15.968760   66293 machine.go:96] duration metric: took 4m37.423316886s to provisionDockerMachine
	I1004 04:23:15.968806   66293 fix.go:56] duration metric: took 4m37.446149084s for fixHost
	I1004 04:23:15.968814   66293 start.go:83] releasing machines lock for "no-preload-658545", held for 4m37.446179902s
	W1004 04:23:15.968836   66293 start.go:714] error starting host: provision: host is not running
	W1004 04:23:15.968935   66293 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1004 04:23:15.968946   66293 start.go:729] Will try again in 5 seconds ...
	I1004 04:23:15.991914   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Start
	I1004 04:23:15.992106   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring networks are active...
	I1004 04:23:15.992995   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring network default is active
	I1004 04:23:15.993392   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring network mk-embed-certs-934812 is active
	I1004 04:23:15.993728   66755 main.go:141] libmachine: (embed-certs-934812) Getting domain xml...
	I1004 04:23:15.994410   66755 main.go:141] libmachine: (embed-certs-934812) Creating domain...
	I1004 04:23:17.232262   66755 main.go:141] libmachine: (embed-certs-934812) Waiting to get IP...
	I1004 04:23:17.233339   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.233793   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.233879   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.233797   67957 retry.go:31] will retry after 221.075745ms: waiting for machine to come up
	I1004 04:23:17.456413   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.456917   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.456941   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.456869   67957 retry.go:31] will retry after 354.386237ms: waiting for machine to come up
	I1004 04:23:17.812523   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.812949   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.812973   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.812905   67957 retry.go:31] will retry after 338.999517ms: waiting for machine to come up
	I1004 04:23:18.153589   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:18.154029   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:18.154056   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:18.153987   67957 retry.go:31] will retry after 555.533205ms: waiting for machine to come up
	I1004 04:23:18.710680   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:18.711155   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:18.711181   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:18.711104   67957 retry.go:31] will retry after 733.812197ms: waiting for machine to come up
	I1004 04:23:20.970507   66293 start.go:360] acquireMachinesLock for no-preload-658545: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:23:19.447202   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:19.447644   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:19.447671   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:19.447600   67957 retry.go:31] will retry after 575.303848ms: waiting for machine to come up
	I1004 04:23:20.024465   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:20.024788   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:20.024819   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:20.024735   67957 retry.go:31] will retry after 894.593683ms: waiting for machine to come up
	I1004 04:23:20.920880   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:20.921499   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:20.921522   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:20.921480   67957 retry.go:31] will retry after 924.978895ms: waiting for machine to come up
	I1004 04:23:21.848064   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:21.848498   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:21.848619   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:21.848550   67957 retry.go:31] will retry after 1.554806984s: waiting for machine to come up
	I1004 04:23:23.404569   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:23.404936   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:23.404964   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:23.404884   67957 retry.go:31] will retry after 1.700496318s: waiting for machine to come up
	I1004 04:23:25.106988   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:25.107410   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:25.107441   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:25.107351   67957 retry.go:31] will retry after 1.913555474s: waiting for machine to come up
	I1004 04:23:27.022672   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:27.023134   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:27.023161   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:27.023096   67957 retry.go:31] will retry after 3.208946613s: waiting for machine to come up
	I1004 04:23:30.235462   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:30.235910   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:30.235942   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:30.235868   67957 retry.go:31] will retry after 3.125545279s: waiting for machine to come up
	I1004 04:23:33.364563   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.365007   66755 main.go:141] libmachine: (embed-certs-934812) Found IP for machine: 192.168.61.74
	I1004 04:23:33.365031   66755 main.go:141] libmachine: (embed-certs-934812) Reserving static IP address...
	I1004 04:23:33.365047   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has current primary IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.365595   66755 main.go:141] libmachine: (embed-certs-934812) Reserved static IP address: 192.168.61.74
	I1004 04:23:33.365628   66755 main.go:141] libmachine: (embed-certs-934812) Waiting for SSH to be available...
	I1004 04:23:33.365648   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "embed-certs-934812", mac: "52:54:00:25:fb:50", ip: "192.168.61.74"} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.365667   66755 main.go:141] libmachine: (embed-certs-934812) DBG | skip adding static IP to network mk-embed-certs-934812 - found existing host DHCP lease matching {name: "embed-certs-934812", mac: "52:54:00:25:fb:50", ip: "192.168.61.74"}
	I1004 04:23:33.365682   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Getting to WaitForSSH function...
	I1004 04:23:33.367835   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.368155   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.368185   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.368297   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Using SSH client type: external
	I1004 04:23:33.368322   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa (-rw-------)
	I1004 04:23:33.368359   66755 main.go:141] libmachine: (embed-certs-934812) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.74 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:23:33.368369   66755 main.go:141] libmachine: (embed-certs-934812) DBG | About to run SSH command:
	I1004 04:23:33.368377   66755 main.go:141] libmachine: (embed-certs-934812) DBG | exit 0
	I1004 04:23:33.496067   66755 main.go:141] libmachine: (embed-certs-934812) DBG | SSH cmd err, output: <nil>: 
	I1004 04:23:33.496559   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetConfigRaw
	I1004 04:23:33.497310   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:33.500858   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.501360   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.501403   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.501750   66755 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/config.json ...
	I1004 04:23:33.502058   66755 machine.go:93] provisionDockerMachine start ...
	I1004 04:23:33.502084   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:33.502303   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.505899   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.506442   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.506475   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.506686   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.506947   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.507165   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.507324   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.507541   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.507744   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.507757   66755 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:23:33.624518   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:23:33.624547   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.624795   66755 buildroot.go:166] provisioning hostname "embed-certs-934812"
	I1004 04:23:33.624826   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.625021   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.627597   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.627916   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.627948   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.628115   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.628312   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.628444   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.628608   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.628785   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.629023   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.629040   66755 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-934812 && echo "embed-certs-934812" | sudo tee /etc/hostname
	I1004 04:23:33.758642   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-934812
	
	I1004 04:23:33.758681   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.761325   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.761654   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.761696   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.761849   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.762034   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.762164   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.762297   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.762426   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.762636   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.762652   66755 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-934812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-934812/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-934812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:23:33.889571   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:33.889601   66755 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:23:33.889642   66755 buildroot.go:174] setting up certificates
	I1004 04:23:33.889654   66755 provision.go:84] configureAuth start
	I1004 04:23:33.889681   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.889992   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:33.892657   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.893063   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.893087   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.893310   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.895770   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.896126   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.896162   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.896328   66755 provision.go:143] copyHostCerts
	I1004 04:23:33.896397   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:23:33.896408   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:23:33.896472   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:23:33.896565   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:23:33.896573   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:23:33.896595   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:23:33.896652   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:23:33.896659   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:23:33.896678   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:23:33.896724   66755 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.embed-certs-934812 san=[127.0.0.1 192.168.61.74 embed-certs-934812 localhost minikube]
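The provision step above generates a server certificate whose SANs cover 127.0.0.1, the VM IP, the profile name, localhost and minikube. A compact Go sketch of producing a certificate with that SAN list follows; it is self-signed for brevity, whereas the real provisioner signs with the ca.pem/ca-key.pem pair named in the log line.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-934812"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line: IPs plus DNS names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.74")},
		DNSNames:    []string{"embed-certs-934812", "localhost", "minikube"},
	}
	// Self-signed here; the provisioner instead uses the machine CA as the parent.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}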
	I1004 04:23:33.997867   66755 provision.go:177] copyRemoteCerts
	I1004 04:23:33.997923   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:23:33.997950   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.001050   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.001422   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.001461   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.001733   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.001961   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.002125   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.002246   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.090823   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:23:34.116934   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1004 04:23:34.669084   67282 start.go:364] duration metric: took 2m46.052475725s to acquireMachinesLock for "old-k8s-version-420062"
	I1004 04:23:34.669158   67282 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:34.669168   67282 fix.go:54] fixHost starting: 
	I1004 04:23:34.669584   67282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:34.669640   67282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:34.686790   67282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I1004 04:23:34.687312   67282 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:34.687829   67282 main.go:141] libmachine: Using API Version  1
	I1004 04:23:34.687857   67282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:34.688238   67282 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:34.688415   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:34.688579   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetState
	I1004 04:23:34.690288   67282 fix.go:112] recreateIfNeeded on old-k8s-version-420062: state=Stopped err=<nil>
	I1004 04:23:34.690326   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	W1004 04:23:34.690467   67282 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:34.692283   67282 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-420062" ...
	I1004 04:23:34.143763   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 04:23:34.168897   66755 provision.go:87] duration metric: took 279.227966ms to configureAuth
	I1004 04:23:34.168929   66755 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:23:34.169096   66755 config.go:182] Loaded profile config "embed-certs-934812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:23:34.169168   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.171638   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.171952   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.171977   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.172178   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.172349   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.172503   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.172594   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.172717   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:34.172924   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:34.172943   66755 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:23:34.411661   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:23:34.411690   66755 machine.go:96] duration metric: took 909.61315ms to provisionDockerMachine
	I1004 04:23:34.411703   66755 start.go:293] postStartSetup for "embed-certs-934812" (driver="kvm2")
	I1004 04:23:34.411716   66755 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:23:34.411734   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.412070   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:23:34.412099   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.415246   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.415583   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.415643   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.415802   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.415997   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.416170   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.416322   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.507385   66755 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:23:34.511963   66755 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:23:34.511990   66755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:23:34.512064   66755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:23:34.512152   66755 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:23:34.512270   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:23:34.522375   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:34.547860   66755 start.go:296] duration metric: took 136.143527ms for postStartSetup
	I1004 04:23:34.547904   66755 fix.go:56] duration metric: took 18.578910472s for fixHost
	I1004 04:23:34.547931   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.550715   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.551031   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.551067   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.551194   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.551391   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.551568   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.551724   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.551903   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:34.552055   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:34.552064   66755 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:23:34.668944   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015814.641353752
	
	I1004 04:23:34.668966   66755 fix.go:216] guest clock: 1728015814.641353752
	I1004 04:23:34.668974   66755 fix.go:229] Guest: 2024-10-04 04:23:34.641353752 +0000 UTC Remote: 2024-10-04 04:23:34.547909289 +0000 UTC m=+265.449211021 (delta=93.444463ms)
	I1004 04:23:34.668993   66755 fix.go:200] guest clock delta is within tolerance: 93.444463ms
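The fix.go lines above read the guest's `date +%s.%N` output, compare it with the host ("Remote") timestamp, and accept the machine because the delta is within tolerance. A small Go sketch of that comparison, using the exact values from the log (the tolerance constant is an assumption for this sketch):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "seconds.nanoseconds" output from `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	guest, err := parseGuestClock("1728015814.641353752") // guest value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, 10, 4, 4, 23, 34, 547909289, time.UTC) // the "Remote" time from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v\n", delta) // prints 93.444463ms for these inputs
	if delta > tolerance {
		fmt.Println("delta outside tolerance: the guest clock would need adjusting")
	} else {
		fmt.Println("guest clock delta is within tolerance")
	}
}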
	I1004 04:23:34.668999   66755 start.go:83] releasing machines lock for "embed-certs-934812", held for 18.70003051s
	I1004 04:23:34.669024   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.669299   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:34.672346   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.672757   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.672796   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.672966   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673609   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673816   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673940   66755 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:23:34.673982   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.674020   66755 ssh_runner.go:195] Run: cat /version.json
	I1004 04:23:34.674043   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.676934   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677085   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677379   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.677406   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677449   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.677480   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677560   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.677677   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.677758   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.677811   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.677873   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.677928   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.677979   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.678022   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.761509   66755 ssh_runner.go:195] Run: systemctl --version
	I1004 04:23:34.784487   66755 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:23:34.934037   66755 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:23:34.942569   66755 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:23:34.942642   66755 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:23:34.960164   66755 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:23:34.960197   66755 start.go:495] detecting cgroup driver to use...
	I1004 04:23:34.960276   66755 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:23:34.979195   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:23:34.994660   66755 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:23:34.994747   66755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:23:35.011209   66755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:23:35.031746   66755 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:23:35.146164   66755 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:23:35.287092   66755 docker.go:233] disabling docker service ...
	I1004 04:23:35.287167   66755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:23:35.308007   66755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:23:35.323235   66755 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:23:35.473583   66755 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:23:35.610098   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:23:35.624276   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:23:35.643810   66755 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:23:35.643873   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.655804   66755 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:23:35.655875   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.668260   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.679770   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.692649   66755 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:23:35.704364   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.715539   66755 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.739272   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
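The sequence of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pinning pause_image to registry.k8s.io/pause:3.10, forcing cgroup_manager to cgroupfs, and opening net.ipv4.ip_unprivileged_port_start. A rough Go equivalent of one such line-oriented replacement (a sketch only; minikube itself shells out to sed as shown):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// setConfValue replaces any line that assigns key with `key = "value"`,
// mimicking the `sed -i 's|^.*key = .*$|...|'` calls in the log above.
func setConfValue(conf, key, value string) string {
	var out []string
	sc := bufio.NewScanner(strings.NewReader(conf))
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, key+" = ") {
			line = fmt.Sprintf("%s = %q", key, value)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\""
	conf = setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setConfValue(conf, "cgroup_manager", "cgroupfs")
	fmt.Println(conf)
}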
	I1004 04:23:35.754538   66755 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:23:35.766476   66755 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:23:35.766566   66755 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:23:35.781677   66755 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:23:35.792640   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:35.910787   66755 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:23:36.015877   66755 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:23:36.015948   66755 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:23:36.021573   66755 start.go:563] Will wait 60s for crictl version
	I1004 04:23:36.021642   66755 ssh_runner.go:195] Run: which crictl
	I1004 04:23:36.025605   66755 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:23:36.064644   66755 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:23:36.064714   66755 ssh_runner.go:195] Run: crio --version
	I1004 04:23:36.094751   66755 ssh_runner.go:195] Run: crio --version
	I1004 04:23:36.127213   66755 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:23:34.693590   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .Start
	I1004 04:23:34.693792   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring networks are active...
	I1004 04:23:34.694582   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring network default is active
	I1004 04:23:34.694917   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring network mk-old-k8s-version-420062 is active
	I1004 04:23:34.695322   67282 main.go:141] libmachine: (old-k8s-version-420062) Getting domain xml...
	I1004 04:23:34.696052   67282 main.go:141] libmachine: (old-k8s-version-420062) Creating domain...
	I1004 04:23:35.995511   67282 main.go:141] libmachine: (old-k8s-version-420062) Waiting to get IP...
	I1004 04:23:35.996465   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:35.996962   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:35.997031   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:35.996923   68093 retry.go:31] will retry after 296.620059ms: waiting for machine to come up
	I1004 04:23:36.295737   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:36.296226   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:36.296257   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:36.296182   68093 retry.go:31] will retry after 311.736827ms: waiting for machine to come up
	I1004 04:23:36.610158   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:36.610804   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:36.610829   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:36.610759   68093 retry.go:31] will retry after 440.646496ms: waiting for machine to come up
	I1004 04:23:37.053487   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:37.053956   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:37.053981   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:37.053923   68093 retry.go:31] will retry after 550.190101ms: waiting for machine to come up
	I1004 04:23:37.605404   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:37.605775   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:37.605815   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:37.605743   68093 retry.go:31] will retry after 721.648529ms: waiting for machine to come up
	I1004 04:23:38.328819   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:38.329323   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:38.329362   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:38.329281   68093 retry.go:31] will retry after 825.234448ms: waiting for machine to come up
	I1004 04:23:36.128549   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:36.131439   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:36.131827   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:36.131856   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:36.132054   66755 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1004 04:23:36.136650   66755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
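The one-liner above rewrites /etc/hosts so that exactly one host.minikube.internal entry points at the gateway address: it drops any existing line for that name and appends a fresh one. The same idempotent filter-then-append step in Go (a sketch operating on a string instead of the real file):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry removes any line for name and appends "ip\tname",
// matching the grep -v / echo pipeline in the log above.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.61.1\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "192.168.61.1", "host.minikube.internal"))
}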
	I1004 04:23:36.149563   66755 kubeadm.go:883] updating cluster {Name:embed-certs-934812 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:23:36.149691   66755 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:23:36.149738   66755 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:36.188235   66755 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:23:36.188316   66755 ssh_runner.go:195] Run: which lz4
	I1004 04:23:36.192619   66755 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:23:36.196876   66755 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:23:36.196909   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 04:23:37.711672   66755 crio.go:462] duration metric: took 1.519102092s to copy over tarball
	I1004 04:23:37.711752   66755 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:23:39.155736   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:39.156199   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:39.156229   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:39.156150   68093 retry.go:31] will retry after 970.793402ms: waiting for machine to come up
	I1004 04:23:40.128963   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:40.129454   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:40.129507   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:40.129419   68093 retry.go:31] will retry after 1.460395601s: waiting for machine to come up
	I1004 04:23:41.592145   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:41.592653   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:41.592677   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:41.592600   68093 retry.go:31] will retry after 1.397092356s: waiting for machine to come up
	I1004 04:23:42.992176   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:42.992670   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:42.992724   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:42.992663   68093 retry.go:31] will retry after 1.560294099s: waiting for machine to come up
	I1004 04:23:39.864408   66755 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.152629063s)
	I1004 04:23:39.864437   66755 crio.go:469] duration metric: took 2.152732931s to extract the tarball
	I1004 04:23:39.864446   66755 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:23:39.902496   66755 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:39.956348   66755 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:23:39.956373   66755 cache_images.go:84] Images are preloaded, skipping loading
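Taken together, the preload steps above are: ask crictl for the image list, and if the expected control-plane images are missing, copy the preloaded tarball over and unpack it into /var before re-checking. A condensed Go sketch of that decision using os/exec (illustrative only; the commands, paths and image name are taken from the log lines above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasImage reports whether `crictl images` already lists the given image,
// the same check that separates "assuming images are not preloaded" from
// "all images are preloaded" in the log above.
func hasImage(image string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), image), nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	if err != nil {
		fmt.Println("crictl not available here:", err)
		return
	}
	if ok {
		fmt.Println("all images are preloaded, skipping loading")
		return
	}
	// Missing images: unpack the preloaded tarball the way the log does.
	extract := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if err := extract.Run(); err != nil {
		fmt.Println("extract failed:", err)
	}
}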
	I1004 04:23:39.956381   66755 kubeadm.go:934] updating node { 192.168.61.74 8443 v1.31.1 crio true true} ...
	I1004 04:23:39.956509   66755 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-934812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:23:39.956572   66755 ssh_runner.go:195] Run: crio config
	I1004 04:23:40.014396   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:23:40.014423   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:23:40.014436   66755 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:23:40.014470   66755 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.74 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-934812 NodeName:embed-certs-934812 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:23:40.014642   66755 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-934812"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:23:40.014728   66755 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:23:40.025328   66755 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:23:40.025441   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:23:40.035733   66755 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1004 04:23:40.057427   66755 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:23:40.078636   66755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
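The kubeadm config printed above is generated from the cluster and node settings (profile name, advertise address, pod and service CIDRs) and written to /var/tmp/minikube/kubeadm.yaml.new. A toy Go rendering of the same idea with text/template (a sketch; the struct and field names here are invented for illustration, and only a fragment of the YAML is reproduced):

package main

import (
	"os"
	"text/template"
)

// nodeConfig holds just the values that vary in the kubeadm snippet above.
type nodeConfig struct {
	Name        string
	IP          string
	Port        int
	PodSubnet   string
	ServiceCIDR string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.IP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.Name}}"
  kubeletExtraArgs:
    node-ip: {{.IP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	cfg := nodeConfig{
		Name:        "embed-certs-934812",
		IP:          "192.168.61.74",
		Port:        8443,
		PodSubnet:   "10.244.0.0/16",
		ServiceCIDR: "10.96.0.0/12",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}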
	I1004 04:23:40.100583   66755 ssh_runner.go:195] Run: grep 192.168.61.74	control-plane.minikube.internal$ /etc/hosts
	I1004 04:23:40.104780   66755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.74	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:23:40.118484   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:40.245425   66755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:23:40.268739   66755 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812 for IP: 192.168.61.74
	I1004 04:23:40.268764   66755 certs.go:194] generating shared ca certs ...
	I1004 04:23:40.268792   66755 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:23:40.268962   66755 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:23:40.269022   66755 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:23:40.269035   66755 certs.go:256] generating profile certs ...
	I1004 04:23:40.269145   66755 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/client.key
	I1004 04:23:40.269226   66755 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.key.0181efa9
	I1004 04:23:40.269290   66755 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.key
	I1004 04:23:40.269436   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:23:40.269483   66755 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:23:40.269497   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:23:40.269535   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:23:40.269575   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:23:40.269607   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:23:40.269658   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:40.270269   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:23:40.316579   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:23:40.352928   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:23:40.383124   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:23:40.410211   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1004 04:23:40.442388   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 04:23:40.473580   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:23:40.501589   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:23:40.527299   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:23:40.551994   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:23:40.576644   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:23:40.601518   66755 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:23:40.620092   66755 ssh_runner.go:195] Run: openssl version
	I1004 04:23:40.626451   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:23:40.637754   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.642413   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.642472   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.648449   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:23:40.659371   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:23:40.670276   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.674793   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.674844   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.680550   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:23:40.691439   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:23:40.702237   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.706876   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.706937   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.712970   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
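Note: each CA copied into /usr/share/ca-certificates above is then exposed in /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0), which is what the `openssl x509 -hash -noout` plus symlink commands produce. A rough Go sketch of that install step, shelling out to openssl the same way (the installCA helper is illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA links certPath into /etc/ssl/certs under its OpenSSL subject hash.
    func installCA(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }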
	I1004 04:23:40.724505   66755 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:23:40.729486   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:23:40.735720   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:23:40.741680   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:23:40.747975   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:23:40.754056   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:23:40.760235   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
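Note: `openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 24 hours, which is how the commands above decide whether the control-plane certs are still usable. An equivalent check in Go (the expiresSoon helper is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresSoon reports whether the PEM certificate at path expires within d.
    func expiresSoon(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }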
	I1004 04:23:40.766463   66755 kubeadm.go:392] StartCluster: {Name:embed-certs-934812 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:23:40.766576   66755 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:23:40.766635   66755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:23:40.805927   66755 cri.go:89] found id: ""
	I1004 04:23:40.805995   66755 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:23:40.816693   66755 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:23:40.816717   66755 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:23:40.816770   66755 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:23:40.827024   66755 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:23:40.828056   66755 kubeconfig.go:125] found "embed-certs-934812" server: "https://192.168.61.74:8443"
	I1004 04:23:40.830076   66755 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:23:40.840637   66755 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.74
	I1004 04:23:40.840673   66755 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:23:40.840686   66755 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:23:40.840741   66755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:23:40.877659   66755 cri.go:89] found id: ""
	I1004 04:23:40.877737   66755 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:23:40.894712   66755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:23:40.904202   66755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:23:40.904224   66755 kubeadm.go:157] found existing configuration files:
	
	I1004 04:23:40.904290   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:23:40.913941   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:23:40.914003   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:23:40.924730   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:23:40.934706   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:23:40.934784   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:23:40.945008   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:23:40.954864   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:23:40.954949   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:23:40.965357   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:23:40.975380   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:23:40.975459   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:23:40.986157   66755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:23:41.001260   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:41.129150   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:41.839910   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:42.059079   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:42.132717   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
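Note: on restart the control plane is rebuilt phase by phase rather than with a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, etcd, in that order. The sketch below drives the same phase sequence locally; it is not minikube's code (minikube runs these over SSH via ssh_runner) and the paths mirror the commands above:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func runPhase(phase string) error {
        args := append([]string{"init", "phase"}, strings.Fields(phase)...)
        args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
        cmd := exec.Command("kubeadm", args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        // Same order as the restart above: regenerate certs and kubeconfigs,
        // start the kubelet, then bring up the static-pod control plane and etcd.
        for _, phase := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
            if err := runPhase(phase); err != nil {
                fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
                os.Exit(1)
            }
        }
    }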
	I1004 04:23:42.204227   66755 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:23:42.204389   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:42.704572   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.205099   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.704555   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.720983   66755 api_server.go:72] duration metric: took 1.516755506s to wait for apiserver process to appear ...
	I1004 04:23:43.721020   66755 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:23:43.721043   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.578729   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:23:46.578764   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:23:46.578780   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.611578   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:23:46.611609   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:23:46.721894   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.728611   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:46.728649   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:47.221889   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:47.229348   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:47.229382   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:47.721971   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:47.741433   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:47.741460   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:48.222154   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:48.226802   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 200:
	ok
	I1004 04:23:48.233611   66755 api_server.go:141] control plane version: v1.31.1
	I1004 04:23:48.233645   66755 api_server.go:131] duration metric: took 4.512616682s to wait for apiserver health ...
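Note: the 403 and 500 responses above are expected during startup; /healthz answers before the rbac/bootstrap-roles and priority-class post-start hooks finish, so the client keeps polling until it sees a 200. A compact polling loop of the same shape (this sketch skips TLS verification for brevity; the endpoint, interval, and timeout are illustrative):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url every 500ms until it returns 200 or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.61.74:8443/healthz", 4*time.Minute))
    }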
	I1004 04:23:48.233655   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:23:48.233662   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:23:48.235421   66755 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:23:44.555619   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:44.556128   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:44.556154   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:44.556061   68093 retry.go:31] will retry after 2.564674777s: waiting for machine to come up
	I1004 04:23:47.123819   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:47.124235   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:47.124263   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:47.124181   68093 retry.go:31] will retry after 2.408805702s: waiting for machine to come up
	I1004 04:23:48.236675   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:23:48.248304   66755 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
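Note: the 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration mentioned two lines earlier. The log does not show its contents; the Go snippet below writes a generic bridge + host-local + portmap conflist purely as an illustration of the file's shape, not minikube's actual file:

    package main

    import "os"

    // Illustrative bridge CNI conflist (not necessarily minikube's exact contents).
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            panic(err)
        }
    }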
	I1004 04:23:48.273584   66755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:23:48.288132   66755 system_pods.go:59] 8 kube-system pods found
	I1004 04:23:48.288174   66755 system_pods.go:61] "coredns-7c65d6cfc9-z7pqn" [f206a8bf-5c18-49f2-9fae-a48a38d608a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:23:48.288208   66755 system_pods.go:61] "etcd-embed-certs-934812" [07a8f2db-6d47-469b-b0e4-749d1e106522] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:23:48.288218   66755 system_pods.go:61] "kube-apiserver-embed-certs-934812" [f36bc69a-a04e-40c2-8f78-a983ddbf28aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:23:48.288227   66755 system_pods.go:61] "kube-controller-manager-embed-certs-934812" [06d73118-fa31-4c98-b1e8-099611718b19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:23:48.288232   66755 system_pods.go:61] "kube-proxy-9qpgb" [6d833f16-4b8e-4409-99b6-214babe699c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 04:23:48.288238   66755 system_pods.go:61] "kube-scheduler-embed-certs-934812" [d076a245-49b6-4d8b-949a-2b559cd1d4d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:23:48.288243   66755 system_pods.go:61] "metrics-server-6867b74b74-d5b6b" [f4ec5d83-22a7-49e5-97e9-3519a29484fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:23:48.288250   66755 system_pods.go:61] "storage-provisioner" [2e76a95b-d6e2-4c1d-b954-3da8c2670a4a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 04:23:48.288259   66755 system_pods.go:74] duration metric: took 14.644463ms to wait for pod list to return data ...
	I1004 04:23:48.288265   66755 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:23:48.293121   66755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:23:48.293153   66755 node_conditions.go:123] node cpu capacity is 2
	I1004 04:23:48.293166   66755 node_conditions.go:105] duration metric: took 4.895489ms to run NodePressure ...
	I1004 04:23:48.293184   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:48.633398   66755 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:23:48.639243   66755 kubeadm.go:739] kubelet initialised
	I1004 04:23:48.639282   66755 kubeadm.go:740] duration metric: took 5.842777ms waiting for restarted kubelet to initialise ...
	I1004 04:23:48.639293   66755 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:23:48.650460   66755 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace to be "Ready" ...
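Note: pod_ready.go polls each system-critical pod until its Ready condition turns True, which is why the next few lines report "Ready":"False" while the restarted components come back. A rough client-go equivalent of that wait (client-go usage and the helper name are assumptions; minikube uses its own wrappers):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod until its Ready condition is True or timeout elapses.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not ready after %s", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitPodReady(cs, "kube-system", "coredns-7c65d6cfc9-z7pqn", 4*time.Minute))
    }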
	I1004 04:23:49.535979   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:49.536361   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:49.536388   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:49.536332   68093 retry.go:31] will retry after 4.242056709s: waiting for machine to come up
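Note: the retry.go lines above wait for the restarted VM to obtain an IP, sleeping for a growing, jittered interval between attempts. A small sketch of that retry pattern (helper name, delays, and timeout are illustrative):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil keeps calling fn with a growing, jittered delay until it
    // succeeds or the deadline passes.
    func retryUntil(timeout time.Duration, fn func() error) error {
        deadline := time.Now().Add(timeout)
        delay := time.Second
        for {
            if err := fn(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for machine to come up")
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s\n", wait)
            time.Sleep(wait)
            delay *= 2
        }
    }

    func main() {
        _ = retryUntil(10*time.Second, func() error { return errors.New("no IP yet") })
    }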
	I1004 04:23:50.657094   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:52.657717   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:55.089234   67541 start.go:364] duration metric: took 2m31.706739813s to acquireMachinesLock for "default-k8s-diff-port-281471"
	I1004 04:23:55.089300   67541 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:55.089311   67541 fix.go:54] fixHost starting: 
	I1004 04:23:55.089673   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:55.089718   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:55.110154   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
	I1004 04:23:55.110566   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:55.111001   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:23:55.111025   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:55.111417   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:55.111627   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:23:55.111794   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:23:55.113328   67541 fix.go:112] recreateIfNeeded on default-k8s-diff-port-281471: state=Stopped err=<nil>
	I1004 04:23:55.113356   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	W1004 04:23:55.113537   67541 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:55.115190   67541 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-281471" ...
	I1004 04:23:53.783128   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.783631   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has current primary IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.783669   67282 main.go:141] libmachine: (old-k8s-version-420062) Found IP for machine: 192.168.50.146
	I1004 04:23:53.783684   67282 main.go:141] libmachine: (old-k8s-version-420062) Reserving static IP address...
	I1004 04:23:53.784173   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "old-k8s-version-420062", mac: "52:54:00:fb:e4:4f", ip: "192.168.50.146"} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.784206   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | skip adding static IP to network mk-old-k8s-version-420062 - found existing host DHCP lease matching {name: "old-k8s-version-420062", mac: "52:54:00:fb:e4:4f", ip: "192.168.50.146"}
	I1004 04:23:53.784222   67282 main.go:141] libmachine: (old-k8s-version-420062) Reserved static IP address: 192.168.50.146
	I1004 04:23:53.784238   67282 main.go:141] libmachine: (old-k8s-version-420062) Waiting for SSH to be available...
	I1004 04:23:53.784250   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Getting to WaitForSSH function...
	I1004 04:23:53.786551   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.786985   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.787016   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.787207   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using SSH client type: external
	I1004 04:23:53.787244   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa (-rw-------)
	I1004 04:23:53.787285   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:23:53.787301   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | About to run SSH command:
	I1004 04:23:53.787315   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | exit 0
	I1004 04:23:53.916121   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | SSH cmd err, output: <nil>: 
	I1004 04:23:53.916487   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetConfigRaw
	I1004 04:23:53.917200   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:53.919846   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.920295   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.920323   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.920641   67282 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/config.json ...
	I1004 04:23:53.920902   67282 machine.go:93] provisionDockerMachine start ...
	I1004 04:23:53.920930   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:53.921137   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:53.923647   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.924000   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.924039   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.924198   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:53.924375   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:53.924508   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:53.924659   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:53.924796   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:53.925024   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:53.925036   67282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:23:54.044565   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:23:54.044595   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.044820   67282 buildroot.go:166] provisioning hostname "old-k8s-version-420062"
	I1004 04:23:54.044837   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.045006   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.047682   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.048032   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.048060   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.048186   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.048376   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.048525   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.048694   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.048853   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.049077   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.049098   67282 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-420062 && echo "old-k8s-version-420062" | sudo tee /etc/hostname
	I1004 04:23:54.183772   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-420062
	
	I1004 04:23:54.183835   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.186969   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.187333   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.187368   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.187754   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.188000   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.188177   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.188334   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.188559   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.188778   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.188803   67282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-420062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-420062/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-420062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:23:54.313827   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:54.313852   67282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:23:54.313896   67282 buildroot.go:174] setting up certificates
	I1004 04:23:54.313913   67282 provision.go:84] configureAuth start
	I1004 04:23:54.313925   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.314208   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:54.317028   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.317378   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.317408   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.317549   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.320292   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.320690   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.320718   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.320874   67282 provision.go:143] copyHostCerts
	I1004 04:23:54.320945   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:23:54.320957   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:23:54.321020   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:23:54.321144   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:23:54.321157   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:23:54.321184   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:23:54.321269   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:23:54.321279   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:23:54.321306   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:23:54.321378   67282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-420062 san=[127.0.0.1 192.168.50.146 localhost minikube old-k8s-version-420062]
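Note: the server cert generated above is signed by the machine CA and carries the IP and DNS SANs listed in the log line, with the 26280h (3 year) expiry from the cluster config. A simplified Go sketch of issuing such a cert (error handling trimmed; assumes a PKCS#1 RSA CA key, which is an assumption, not something the log confirms):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Load the CA pair named in the log (error handling trimmed for brevity).
        caPEM, _ := os.ReadFile("/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem")
        caKeyPEM, _ := os.ReadFile("/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem")
        caBlock, _ := pem.Decode(caPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1 key

        // Server certificate with the SANs from the provision.go line above.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-420062"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.146")},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-420062"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        _ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
        _ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
    }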
	I1004 04:23:54.395370   67282 provision.go:177] copyRemoteCerts
	I1004 04:23:54.395422   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:23:54.395452   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.398647   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.399153   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.399194   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.399392   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.399582   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.399852   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.399991   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:54.491055   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:23:54.523206   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1004 04:23:54.549843   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:23:54.580403   67282 provision.go:87] duration metric: took 266.475364ms to configureAuth
	I1004 04:23:54.580438   67282 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:23:54.580645   67282 config.go:182] Loaded profile config "old-k8s-version-420062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1004 04:23:54.580736   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.583200   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.583489   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.583522   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.583672   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.583871   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.584066   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.584195   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.584402   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.584567   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.584582   67282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:23:54.835402   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:23:54.835436   67282 machine.go:96] duration metric: took 914.509404ms to provisionDockerMachine
	I1004 04:23:54.835451   67282 start.go:293] postStartSetup for "old-k8s-version-420062" (driver="kvm2")
	I1004 04:23:54.835466   67282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:23:54.835491   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:54.835870   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:23:54.835902   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.838257   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.838645   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.838670   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.838810   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.838972   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.839117   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.839247   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:54.927041   67282 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:23:54.931330   67282 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:23:54.931357   67282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:23:54.931424   67282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:23:54.931538   67282 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:23:54.931658   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:23:54.941402   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:54.967433   67282 start.go:296] duration metric: took 131.968424ms for postStartSetup
	I1004 04:23:54.967495   67282 fix.go:56] duration metric: took 20.29830643s for fixHost
	I1004 04:23:54.967523   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.970138   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.970485   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.970502   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.970802   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.971000   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.971164   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.971330   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.971560   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.971739   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.971751   67282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:23:55.089031   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015835.056238818
	
	I1004 04:23:55.089054   67282 fix.go:216] guest clock: 1728015835.056238818
	I1004 04:23:55.089063   67282 fix.go:229] Guest: 2024-10-04 04:23:55.056238818 +0000 UTC Remote: 2024-10-04 04:23:54.967501465 +0000 UTC m=+186.499621032 (delta=88.737353ms)
	I1004 04:23:55.089086   67282 fix.go:200] guest clock delta is within tolerance: 88.737353ms
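The clock check above runs `date +%s.%N` on the guest over SSH and compares the result with the host's wall clock; only a delta beyond the (unlogged) tolerance would trigger a resync. A minimal shell sketch of the same comparison, assuming the SSH user and IP from the log and an illustrative key path:

    host_now=$(date +%s.%N)
    guest_now=$(ssh docker@192.168.50.146 'date +%s.%N')
    # delta in seconds; small positive/negative skew like the 88ms above is accepted as-is
    delta=$(echo "$guest_now - $host_now" | bc)
    echo "guest clock delta: ${delta}s"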
	I1004 04:23:55.089093   67282 start.go:83] releasing machines lock for "old-k8s-version-420062", held for 20.419961099s
	I1004 04:23:55.089124   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.089472   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:55.092047   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.092519   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.092552   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.092784   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093364   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093566   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093670   67282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:23:55.093715   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:55.093808   67282 ssh_runner.go:195] Run: cat /version.json
	I1004 04:23:55.093834   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:55.096451   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.096829   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.096862   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.096881   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.097173   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:55.097364   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:55.097446   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.097474   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.097548   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:55.097685   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:55.097816   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:55.097823   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:55.097953   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:55.098106   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:55.207195   67282 ssh_runner.go:195] Run: systemctl --version
	I1004 04:23:55.214080   67282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:23:55.369882   67282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:23:55.376111   67282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:23:55.376171   67282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:23:55.393916   67282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:23:55.393945   67282 start.go:495] detecting cgroup driver to use...
	I1004 04:23:55.394015   67282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:23:55.411330   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:23:55.427665   67282 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:23:55.427734   67282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:23:55.445180   67282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:23:55.465131   67282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:23:55.596260   67282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:23:55.781647   67282 docker.go:233] disabling docker service ...
	I1004 04:23:55.781711   67282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:23:55.801252   67282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:23:55.817688   67282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:23:55.952563   67282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:23:56.081096   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
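The runtime selection above works by stopping and masking the competing runtimes so that only CRI-O can serve the CRI socket. Condensed, the commands from the log amount to roughly this (a sketch, not the exact invocation order):

    sudo systemctl stop -f containerd cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    sudo systemctl is-active --quiet docker   # non-zero exit confirms docker is no longer active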
	I1004 04:23:56.096194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:23:56.116859   67282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1004 04:23:56.116924   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.129060   67282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:23:56.129133   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.141246   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.158759   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
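After the three sed edits above, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should read as follows (expected values derived from the commands themselves, shown here as a quick check):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"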
	I1004 04:23:56.172580   67282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:23:56.192027   67282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:23:56.206698   67282 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:23:56.206757   67282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:23:56.223074   67282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
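The failed sysctl above is expected while the br_netfilter module is not yet loaded; loading it creates /proc/sys/net/bridge/bridge-nf-call-iptables, and the follow-up commands from the log are equivalent to:

    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sysctl net.bridge.bridge-nf-call-iptables   # now resolves instead of "No such file or directory"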
	I1004 04:23:56.241061   67282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:56.365616   67282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:23:56.474445   67282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:23:56.474519   67282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:23:56.480077   67282 start.go:563] Will wait 60s for crictl version
	I1004 04:23:56.480133   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:23:56.485207   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:23:56.537710   67282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:23:56.537802   67282 ssh_runner.go:195] Run: crio --version
	I1004 04:23:56.571679   67282 ssh_runner.go:195] Run: crio --version
	I1004 04:23:56.605639   67282 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1004 04:23:55.116525   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Start
	I1004 04:23:55.116723   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring networks are active...
	I1004 04:23:55.117665   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring network default is active
	I1004 04:23:55.118079   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring network mk-default-k8s-diff-port-281471 is active
	I1004 04:23:55.118565   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Getting domain xml...
	I1004 04:23:55.119417   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Creating domain...
	I1004 04:23:56.429715   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting to get IP...
	I1004 04:23:56.430752   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.431261   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.431353   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.431245   68239 retry.go:31] will retry after 200.843618ms: waiting for machine to come up
	I1004 04:23:56.633542   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.633974   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.634003   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.633923   68239 retry.go:31] will retry after 291.906374ms: waiting for machine to come up
	I1004 04:23:56.927325   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.927847   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.927880   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.927813   68239 retry.go:31] will retry after 374.509137ms: waiting for machine to come up
	I1004 04:23:57.304251   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.304713   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.304738   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:57.304671   68239 retry.go:31] will retry after 583.046975ms: waiting for machine to come up
	I1004 04:23:57.889410   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.889837   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.889868   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:57.889795   68239 retry.go:31] will retry after 549.483036ms: waiting for machine to come up
	I1004 04:23:56.606945   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:56.610421   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:56.610952   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:56.610976   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:56.611373   67282 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1004 04:23:56.615872   67282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
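The one-liner above updates /etc/hosts without editing it in place: any existing host.minikube.internal entry is filtered out, the fresh entry is appended to a temp file, and the result is copied back with sudo. Spelled out (printf is used here for the literal tab):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.50.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts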
	I1004 04:23:56.629783   67282 kubeadm.go:883] updating cluster {Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:23:56.629932   67282 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1004 04:23:56.629983   67282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:56.690260   67282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:23:56.690343   67282 ssh_runner.go:195] Run: which lz4
	I1004 04:23:56.695808   67282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:23:56.701593   67282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:23:56.701623   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1004 04:23:54.156612   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"True"
	I1004 04:23:54.156637   66755 pod_ready.go:82] duration metric: took 5.506141622s for pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace to be "Ready" ...
	I1004 04:23:54.156646   66755 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:23:56.164534   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:58.166994   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:58.440643   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:58.441080   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:58.441109   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:58.441034   68239 retry.go:31] will retry after 585.437747ms: waiting for machine to come up
	I1004 04:23:59.027951   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.028414   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.028441   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:59.028369   68239 retry.go:31] will retry after 773.32668ms: waiting for machine to come up
	I1004 04:23:59.803329   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.803752   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.803793   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:59.803722   68239 retry.go:31] will retry after 936.396482ms: waiting for machine to come up
	I1004 04:24:00.741805   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:00.742328   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:00.742372   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:00.742262   68239 retry.go:31] will retry after 1.294836266s: waiting for machine to come up
	I1004 04:24:02.038222   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:02.038750   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:02.038785   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:02.038699   68239 retry.go:31] will retry after 2.282660025s: waiting for machine to come up
	I1004 04:23:58.525796   67282 crio.go:462] duration metric: took 1.830039762s to copy over tarball
	I1004 04:23:58.525868   67282 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:24:01.514552   67282 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.98865618s)
	I1004 04:24:01.514585   67282 crio.go:469] duration metric: took 2.988759159s to extract the tarball
	I1004 04:24:01.514595   67282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:24:01.562130   67282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:01.598856   67282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:24:01.598882   67282 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 04:24:01.598960   67282 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:01.599035   67282 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.599047   67282 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.599048   67282 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1004 04:24:01.599020   67282 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.599025   67282 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.598967   67282 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.598967   67282 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.600760   67282 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.600772   67282 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1004 04:24:01.600767   67282 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:01.600791   67282 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.600802   67282 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.600804   67282 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.600807   67282 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.600840   67282 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.837527   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.877366   67282 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1004 04:24:01.877413   67282 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.877464   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:01.882328   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.914693   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.934055   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.941737   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.943929   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.944540   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.948337   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.970977   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.995537   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1004 04:24:02.127073   67282 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1004 04:24:02.127097   67282 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1004 04:24:02.127121   67282 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.127121   67282 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.127156   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.127159   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128471   67282 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1004 04:24:02.128532   67282 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.128535   67282 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1004 04:24:02.128560   67282 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.128571   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128595   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128598   67282 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1004 04:24:02.128627   67282 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.128669   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128730   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1004 04:24:02.128761   67282 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1004 04:24:02.128783   67282 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1004 04:24:02.128815   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.133675   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.133724   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.141911   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.141950   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.141989   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.142044   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.263733   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.263744   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.263798   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.265990   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.297523   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.297566   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.379282   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.379318   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.379331   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.417271   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.454521   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.454559   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.496644   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1004 04:24:02.533632   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1004 04:24:02.533690   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1004 04:24:02.533750   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1004 04:24:02.568138   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1004 04:24:02.568153   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1004 04:24:02.911933   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:03.055844   67282 cache_images.go:92] duration metric: took 1.456943316s to LoadCachedImages
	W1004 04:24:03.055959   67282 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
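The inspect/rmi sequence above is the per-image check behind LoadCachedImages: each expected image is inspected in the runtime, and if it is missing or its ID differs from the pinned digest, the tag is removed so the image can be re-loaded from the on-disk cache. Roughly, per image (kube-controller-manager shown, digest and cache path taken from the log):

    id=$(sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-controller-manager:v1.20.0 2>/dev/null)
    if [ "$id" != "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" ]; then
      sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
      # the next step would load .minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0;
      # that cache file does not exist on this host, hence the "Unable to load cached images" warning above.
    fi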
	I1004 04:24:03.055976   67282 kubeadm.go:934] updating node { 192.168.50.146 8443 v1.20.0 crio true true} ...
	I1004 04:24:03.056087   67282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-420062 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:03.056162   67282 ssh_runner.go:195] Run: crio config
	I1004 04:24:03.103752   67282 cni.go:84] Creating CNI manager for ""
	I1004 04:24:03.103792   67282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:03.103805   67282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:03.103826   67282 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.146 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-420062 NodeName:old-k8s-version-420062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1004 04:24:03.103952   67282 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-420062"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:03.104008   67282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1004 04:24:03.114316   67282 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:03.114372   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:03.124059   67282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1004 04:24:03.143310   67282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:03.161143   67282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
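At this point the kubelet drop-in, the unit file and the kubeadm config rendered above have all been written to the guest; they can be inspected directly there (illustrative commands, paths from the log):

    systemctl cat kubelet                                   # unit file plus the 10-kubeadm.conf drop-in
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo cat /var/tmp/minikube/kubeadm.yaml.new             # the 2123-byte config shown above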
	I1004 04:24:03.178444   67282 ssh_runner.go:195] Run: grep 192.168.50.146	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:03.182235   67282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:03.195103   67282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:03.317820   67282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:03.334820   67282 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062 for IP: 192.168.50.146
	I1004 04:24:03.334840   67282 certs.go:194] generating shared ca certs ...
	I1004 04:24:03.334855   67282 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:03.335008   67282 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:03.335049   67282 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:03.335059   67282 certs.go:256] generating profile certs ...
	I1004 04:24:03.335156   67282 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.key
	I1004 04:24:03.335212   67282 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key.c1f9ed6b
	I1004 04:24:03.335260   67282 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key
	I1004 04:24:03.335368   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:03.335394   67282 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:03.335401   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:03.335426   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:03.335451   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:03.335476   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:03.335518   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:03.336260   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:03.373985   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:03.408150   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:03.444219   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:03.493160   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1004 04:24:00.665171   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:02.815874   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:04.022715   66755 pod_ready.go:93] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.022744   66755 pod_ready.go:82] duration metric: took 9.866089641s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.022756   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.028094   66755 pod_ready.go:93] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.028115   66755 pod_ready.go:82] duration metric: took 5.350911ms for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.028123   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.033106   66755 pod_ready.go:93] pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.033124   66755 pod_ready.go:82] duration metric: took 4.995208ms for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.033132   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9qpgb" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.037388   66755 pod_ready.go:93] pod "kube-proxy-9qpgb" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.037409   66755 pod_ready.go:82] duration metric: took 4.270278ms for pod "kube-proxy-9qpgb" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.037420   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.042717   66755 pod_ready.go:93] pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.042737   66755 pod_ready.go:82] duration metric: took 5.30887ms for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.042747   66755 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" ...
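The pod_ready polling in the embed-certs lines above is the test helper's own readiness loop; from outside the harness the same gate could be expressed with kubectl (the context name is assumed to match the profile name):

    kubectl --context embed-certs-934812 -n kube-system wait \
      --for=condition=Ready pod/etcd-embed-certs-934812 --timeout=4m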
	I1004 04:24:04.324259   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:04.324749   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:04.324811   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:04.324726   68239 retry.go:31] will retry after 2.070089599s: waiting for machine to come up
	I1004 04:24:06.396547   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:06.396991   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:06.397015   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:06.396944   68239 retry.go:31] will retry after 3.403718824s: waiting for machine to come up
	I1004 04:24:03.533084   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 04:24:03.565405   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:03.613938   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:24:03.642711   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:03.674784   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:03.706968   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:03.731329   67282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:03.749003   67282 ssh_runner.go:195] Run: openssl version
	I1004 04:24:03.755219   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:03.766499   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.771322   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.771413   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.778185   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:03.790581   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:03.802556   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.807312   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.807373   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.813595   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:03.825043   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:03.835389   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.840004   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.840051   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.847540   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
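The openssl/ln pairs above implement the standard OpenSSL CA lookup layout: each certificate is linked under /etc/ssl/certs by its subject hash with a .0 suffix (b5213941.0 for minikubeCA.pem here). The pattern in general, using the paths from the log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0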
	I1004 04:24:03.862303   67282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:03.868029   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:03.874811   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:03.880797   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:03.886622   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:03.892273   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:03.898129   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
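Each of the -checkend 86400 calls above asks openssl whether the certificate will still be valid 86400 seconds (one day) from now; a non-zero exit would mark it for regeneration. For example:

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "etcd server cert is valid for at least another day"
    else
      echo "etcd server cert expires within 24h - would be regenerated"
    fi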
	I1004 04:24:03.905775   67282 kubeadm.go:392] StartCluster: {Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:03.905852   67282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:03.905890   67282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:03.954627   67282 cri.go:89] found id: ""
	I1004 04:24:03.954702   67282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:03.965146   67282 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:03.965170   67282 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:03.965236   67282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:03.975404   67282 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:03.976362   67282 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-420062" does not appear in /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:24:03.976990   67282 kubeconfig.go:62] /home/jenkins/minikube-integration/19546-9647/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-420062" cluster setting kubeconfig missing "old-k8s-version-420062" context setting]
	I1004 04:24:03.977906   67282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:03.979485   67282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:03.989487   67282 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.146
	I1004 04:24:03.989517   67282 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:03.989529   67282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:03.989577   67282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:04.031536   67282 cri.go:89] found id: ""
	I1004 04:24:04.031607   67282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:04.048652   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:04.057813   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:04.057830   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:04.057867   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:24:04.066213   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:04.066252   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:04.074904   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:24:04.083485   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:04.083522   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:04.092314   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:24:04.100528   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:04.100572   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:04.109232   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:24:04.118051   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:04.118091   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:24:04.127430   67282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:04.137949   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:04.272627   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:04.940435   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.181288   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.268873   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.373549   67282 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:05.373653   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:05.874543   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.374154   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.874343   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:07.374744   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:07.874734   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:08.374255   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.050700   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:08.548473   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:09.802504   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:09.802912   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:09.802937   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:09.802870   68239 retry.go:31] will retry after 3.430575602s: waiting for machine to come up
	I1004 04:24:13.236792   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.237230   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Found IP for machine: 192.168.39.201
	I1004 04:24:13.237251   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Reserving static IP address...
	I1004 04:24:13.237268   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has current primary IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.237712   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-281471", mac: "52:54:00:cd:36:92", ip: "192.168.39.201"} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.237745   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Reserved static IP address: 192.168.39.201
	I1004 04:24:13.237765   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | skip adding static IP to network mk-default-k8s-diff-port-281471 - found existing host DHCP lease matching {name: "default-k8s-diff-port-281471", mac: "52:54:00:cd:36:92", ip: "192.168.39.201"}
	I1004 04:24:13.237786   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Getting to WaitForSSH function...
	I1004 04:24:13.237805   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for SSH to be available...
	I1004 04:24:13.240068   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.240354   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.240384   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.240514   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Using SSH client type: external
	I1004 04:24:13.240540   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa (-rw-------)
	I1004 04:24:13.240577   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:24:13.240594   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | About to run SSH command:
	I1004 04:24:13.240608   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | exit 0
	I1004 04:24:08.874627   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:09.374627   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:09.874278   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.374675   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.873949   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:11.373966   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:11.873775   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:12.373874   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:12.874010   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:13.374575   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.550171   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:13.049596   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:14.741098   66293 start.go:364] duration metric: took 53.770546651s to acquireMachinesLock for "no-preload-658545"
	I1004 04:24:14.741156   66293 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:24:14.741164   66293 fix.go:54] fixHost starting: 
	I1004 04:24:14.741565   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:14.741595   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:14.758364   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37947
	I1004 04:24:14.758823   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:14.759356   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:24:14.759383   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:14.759700   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:14.759895   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:14.760077   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:24:14.761849   66293 fix.go:112] recreateIfNeeded on no-preload-658545: state=Stopped err=<nil>
	I1004 04:24:14.761873   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	W1004 04:24:14.762037   66293 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:24:14.764123   66293 out.go:177] * Restarting existing kvm2 VM for "no-preload-658545" ...
	I1004 04:24:13.371830   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | SSH cmd err, output: <nil>: 
	I1004 04:24:13.372219   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetConfigRaw
	I1004 04:24:13.372817   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:13.375676   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.376080   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.376116   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.376393   67541 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/config.json ...
	I1004 04:24:13.376616   67541 machine.go:93] provisionDockerMachine start ...
	I1004 04:24:13.376638   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:13.376845   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.379413   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.379847   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.379908   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.380015   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.380204   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.380360   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.380493   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.380657   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.380913   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.380988   67541 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:24:13.492488   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:24:13.492528   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.492749   67541 buildroot.go:166] provisioning hostname "default-k8s-diff-port-281471"
	I1004 04:24:13.492768   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.492928   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.495691   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.496003   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.496031   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.496160   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.496368   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.496530   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.496651   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.496785   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.497017   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.497034   67541 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-281471 && echo "default-k8s-diff-port-281471" | sudo tee /etc/hostname
	I1004 04:24:13.627336   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-281471
	
	I1004 04:24:13.627364   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.630757   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.631162   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.631199   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.631486   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.631701   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.631874   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.632018   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.632216   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.632431   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.632457   67541 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-281471' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-281471/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-281471' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:24:13.758386   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:24:13.758413   67541 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:24:13.758462   67541 buildroot.go:174] setting up certificates
	I1004 04:24:13.758472   67541 provision.go:84] configureAuth start
	I1004 04:24:13.758484   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.758740   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:13.761590   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.761899   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.761939   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.762068   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.764293   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.764644   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.764672   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.764811   67541 provision.go:143] copyHostCerts
	I1004 04:24:13.764869   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:24:13.764880   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:24:13.764936   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:24:13.765046   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:24:13.765055   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:24:13.765075   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:24:13.765127   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:24:13.765135   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:24:13.765160   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:24:13.765235   67541 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-281471 san=[127.0.0.1 192.168.39.201 default-k8s-diff-port-281471 localhost minikube]
	I1004 04:24:14.075640   67541 provision.go:177] copyRemoteCerts
	I1004 04:24:14.075698   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:24:14.075722   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.078293   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.078658   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.078689   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.078827   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.079048   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.079213   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.079348   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.167232   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:24:14.193065   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1004 04:24:14.218112   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:24:14.243281   67541 provision.go:87] duration metric: took 484.783764ms to configureAuth
	I1004 04:24:14.243310   67541 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:24:14.243506   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:14.243593   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.246497   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.246837   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.246885   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.247019   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.247211   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.247384   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.247551   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.247719   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:14.247909   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:14.247923   67541 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:24:14.487651   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:24:14.487675   67541 machine.go:96] duration metric: took 1.11104473s to provisionDockerMachine
	I1004 04:24:14.487686   67541 start.go:293] postStartSetup for "default-k8s-diff-port-281471" (driver="kvm2")
	I1004 04:24:14.487696   67541 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:24:14.487733   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.488084   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:24:14.488114   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.490844   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.491198   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.491229   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.491372   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.491562   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.491700   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.491815   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.579398   67541 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:24:14.584068   67541 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:24:14.584098   67541 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:24:14.584179   67541 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:24:14.584274   67541 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:24:14.584379   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:24:14.594853   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:14.621833   67541 start.go:296] duration metric: took 134.135256ms for postStartSetup
	I1004 04:24:14.621874   67541 fix.go:56] duration metric: took 19.532563115s for fixHost
	I1004 04:24:14.621895   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.625077   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.625418   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.625443   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.625678   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.625900   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.626059   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.626205   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.626373   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:14.626589   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:14.626603   67541 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:24:14.740932   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015854.697826512
	
	I1004 04:24:14.740950   67541 fix.go:216] guest clock: 1728015854.697826512
	I1004 04:24:14.740957   67541 fix.go:229] Guest: 2024-10-04 04:24:14.697826512 +0000 UTC Remote: 2024-10-04 04:24:14.621877739 +0000 UTC m=+171.379203860 (delta=75.948773ms)
	I1004 04:24:14.741000   67541 fix.go:200] guest clock delta is within tolerance: 75.948773ms
	I1004 04:24:14.741007   67541 start.go:83] releasing machines lock for "default-k8s-diff-port-281471", held for 19.651737082s
	I1004 04:24:14.741031   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.741291   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:14.744142   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.744498   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.744518   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.744720   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745368   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745559   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745665   67541 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:24:14.745706   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.745802   67541 ssh_runner.go:195] Run: cat /version.json
	I1004 04:24:14.745843   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.748443   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748779   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.748813   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748838   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748927   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.749064   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.749245   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.749267   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.749283   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.749441   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.749481   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.749589   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.749725   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.749856   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.833632   67541 ssh_runner.go:195] Run: systemctl --version
	I1004 04:24:14.863812   67541 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:24:15.016823   67541 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:24:15.023613   67541 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:24:15.023696   67541 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:24:15.042546   67541 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:24:15.042576   67541 start.go:495] detecting cgroup driver to use...
	I1004 04:24:15.042645   67541 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:24:15.060267   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:24:15.076088   67541 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:24:15.076155   67541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:24:15.091741   67541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:24:15.107153   67541 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:24:15.230591   67541 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:24:15.381704   67541 docker.go:233] disabling docker service ...
	I1004 04:24:15.381776   67541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:24:15.397616   67541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:24:15.412350   67541 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:24:15.569525   67541 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:24:15.690120   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:24:15.705348   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:24:15.728253   67541 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:24:15.728334   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.739875   67541 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:24:15.739951   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.751997   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.765898   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.777917   67541 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:24:15.791235   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.802390   67541 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.825385   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.837278   67541 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:24:15.848791   67541 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:24:15.848864   67541 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:24:15.870774   67541 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:24:15.883544   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:15.997406   67541 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:24:16.095391   67541 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:24:16.095508   67541 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:24:16.102427   67541 start.go:563] Will wait 60s for crictl version
	I1004 04:24:16.102510   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:24:16.106958   67541 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:24:16.150721   67541 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:24:16.150824   67541 ssh_runner.go:195] Run: crio --version
	I1004 04:24:16.181714   67541 ssh_runner.go:195] Run: crio --version
	I1004 04:24:16.214202   67541 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:24:16.215583   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:16.218418   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:16.218800   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:16.218831   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:16.219002   67541 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 04:24:16.223382   67541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:16.236443   67541 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:24:16.236565   67541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:24:16.236652   67541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:16.279095   67541 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:24:16.279158   67541 ssh_runner.go:195] Run: which lz4
	I1004 04:24:16.283684   67541 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:24:16.288436   67541 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:24:16.288472   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 04:24:17.853549   67541 crio.go:462] duration metric: took 1.569889689s to copy over tarball
	I1004 04:24:17.853631   67541 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:24:14.765651   66293 main.go:141] libmachine: (no-preload-658545) Calling .Start
	I1004 04:24:14.765886   66293 main.go:141] libmachine: (no-preload-658545) Ensuring networks are active...
	I1004 04:24:14.766761   66293 main.go:141] libmachine: (no-preload-658545) Ensuring network default is active
	I1004 04:24:14.767179   66293 main.go:141] libmachine: (no-preload-658545) Ensuring network mk-no-preload-658545 is active
	I1004 04:24:14.767706   66293 main.go:141] libmachine: (no-preload-658545) Getting domain xml...
	I1004 04:24:14.768478   66293 main.go:141] libmachine: (no-preload-658545) Creating domain...
	I1004 04:24:16.087556   66293 main.go:141] libmachine: (no-preload-658545) Waiting to get IP...
	I1004 04:24:16.088628   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.089032   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.089093   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.089008   68422 retry.go:31] will retry after 276.442313ms: waiting for machine to come up
	I1004 04:24:16.367448   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.367923   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.367953   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.367894   68422 retry.go:31] will retry after 291.504157ms: waiting for machine to come up
	I1004 04:24:16.661396   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.661958   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.662009   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.661932   68422 retry.go:31] will retry after 378.34293ms: waiting for machine to come up
	I1004 04:24:17.041431   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:17.041942   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:17.041970   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:17.041916   68422 retry.go:31] will retry after 553.613866ms: waiting for machine to come up
	I1004 04:24:17.596745   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:17.597294   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:17.597327   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:17.597259   68422 retry.go:31] will retry after 611.098402ms: waiting for machine to come up
	I1004 04:24:18.210083   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:18.210569   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:18.210592   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:18.210530   68422 retry.go:31] will retry after 691.8822ms: waiting for machine to come up
	I1004 04:24:13.873857   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:14.374241   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:14.873863   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.374063   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.873950   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:16.373819   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:16.874290   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:17.374357   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:17.874163   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:18.374160   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.049926   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:17.051060   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:20.132987   67541 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.279324141s)
	I1004 04:24:20.133023   67541 crio.go:469] duration metric: took 2.279442603s to extract the tarball
	I1004 04:24:20.133033   67541 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:24:20.171805   67541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:20.217431   67541 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:24:20.217458   67541 cache_images.go:84] Images are preloaded, skipping loading
	I1004 04:24:20.217468   67541 kubeadm.go:934] updating node { 192.168.39.201 8444 v1.31.1 crio true true} ...
	I1004 04:24:20.217586   67541 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-281471 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:20.217687   67541 ssh_runner.go:195] Run: crio config
	I1004 04:24:20.269529   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:24:20.269559   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:20.269569   67541 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:20.269604   67541 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.201 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-281471 NodeName:default-k8s-diff-port-281471 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:24:20.269822   67541 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.201
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-281471"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:20.269913   67541 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:24:20.281286   67541 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:20.281368   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:20.292186   67541 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1004 04:24:20.310972   67541 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:20.329420   67541 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1004 04:24:20.348358   67541 ssh_runner.go:195] Run: grep 192.168.39.201	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:20.352641   67541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:20.366317   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:20.499648   67541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:20.518930   67541 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471 for IP: 192.168.39.201
	I1004 04:24:20.518954   67541 certs.go:194] generating shared ca certs ...
	I1004 04:24:20.518971   67541 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:20.519121   67541 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:20.519167   67541 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:20.519177   67541 certs.go:256] generating profile certs ...
	I1004 04:24:20.519279   67541 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/client.key
	I1004 04:24:20.519347   67541 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.key.6cd63ef9
	I1004 04:24:20.519381   67541 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.key
	I1004 04:24:20.519492   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:20.519527   67541 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:20.519539   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:20.519570   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:20.519614   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:20.519643   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:20.519710   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:20.520418   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:20.566110   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:20.613646   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:20.648416   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:20.678840   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1004 04:24:20.722021   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 04:24:20.749381   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:20.776777   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 04:24:20.803998   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:20.833182   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:20.859600   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:20.887732   67541 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:20.910566   67541 ssh_runner.go:195] Run: openssl version
	I1004 04:24:20.917151   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:20.930475   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.935819   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.935895   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.942607   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:20.954950   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:20.967348   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.972468   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.972543   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.979061   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:20.992010   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:21.008370   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.015101   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.015161   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.023491   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:21.035766   67541 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:21.041416   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:21.048405   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:21.055468   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:21.062228   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:21.068967   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:21.075984   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
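
Each "openssl x509 -checkend 86400" run above asserts that the corresponding control-plane certificate stays valid for at least another 24 hours. The same check can be sketched in Go with crypto/x509; the path in main is just an example taken from the log, and this helper is illustrative rather than minikube's own code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the PEM-encoded certificate at path is still
// valid for at least d, roughly what openssl's -checkend flag verifies.
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
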
	I1004 04:24:21.086088   67541 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:21.086196   67541 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:21.086253   67541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:21.131997   67541 cri.go:89] found id: ""
	I1004 04:24:21.132061   67541 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:21.145219   67541 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:21.145237   67541 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:21.145289   67541 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:21.157041   67541 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:21.158724   67541 kubeconfig.go:125] found "default-k8s-diff-port-281471" server: "https://192.168.39.201:8444"
	I1004 04:24:21.162295   67541 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:21.173771   67541 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.201
	I1004 04:24:21.173806   67541 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:21.173820   67541 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:21.173891   67541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:21.215149   67541 cri.go:89] found id: ""
	I1004 04:24:21.215216   67541 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:21.234432   67541 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:21.245688   67541 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:21.245707   67541 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:21.245758   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1004 04:24:21.256101   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:21.256168   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:21.267319   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1004 04:24:21.279995   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:21.280050   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:21.292588   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1004 04:24:21.304478   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:21.304545   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:21.317012   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1004 04:24:21.328769   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:21.328853   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:24:21.341597   67541 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:21.353901   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:21.483705   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.340208   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.582628   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.662202   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.773206   67541 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:22.773327   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.274151   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:18.903981   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:18.904373   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:18.904398   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:18.904331   68422 retry.go:31] will retry after 1.022635653s: waiting for machine to come up
	I1004 04:24:19.929163   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:19.929707   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:19.929749   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:19.929656   68422 retry.go:31] will retry after 939.130061ms: waiting for machine to come up
	I1004 04:24:20.870067   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:20.870578   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:20.870606   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:20.870521   68422 retry.go:31] will retry after 1.673919202s: waiting for machine to come up
	I1004 04:24:22.546229   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:22.546621   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:22.546650   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:22.546569   68422 retry.go:31] will retry after 1.962556159s: waiting for machine to come up
	I1004 04:24:18.874214   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.374670   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.874355   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:20.374335   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:20.874299   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:21.374492   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:21.874293   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:22.373890   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:22.874622   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.374639   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.552128   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:22.050844   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:24.051071   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:23.774477   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.807536   67541 api_server.go:72] duration metric: took 1.034328656s to wait for apiserver process to appear ...
	I1004 04:24:23.807569   67541 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:24:23.807593   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.646266   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:26.646299   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:26.646319   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.696828   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:26.696856   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:26.808107   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.819887   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:26.819947   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:27.308535   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:27.317320   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:27.317372   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:27.807868   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:27.817762   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:27.817805   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:28.307660   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:28.313515   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 200:
	ok
	I1004 04:24:28.320539   67541 api_server.go:141] control plane version: v1.31.1
	I1004 04:24:28.320568   67541 api_server.go:131] duration metric: took 4.512991081s to wait for apiserver health ...
	I1004 04:24:28.320578   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:24:28.320586   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:28.322138   67541 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
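
The healthz sequence above is typical of a restarted control plane: 403 while the probe is still treated as system:anonymous, 500 while post-start hooks such as rbac/bootstrap-roles finish, then 200 "ok". A rough Go sketch of that polling loop, under the assumption that the self-signed apiserver certificate is skipped rather than verified (real tooling would authenticate with client certificates instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline passes, printing the body of any non-200 response.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.201:8444/healthz", 2*time.Minute))
}
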
	I1004 04:24:24.511356   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:24.511886   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:24.511917   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:24.511843   68422 retry.go:31] will retry after 2.5950382s: waiting for machine to come up
	I1004 04:24:27.109018   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:27.109474   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:27.109503   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:27.109451   68422 retry.go:31] will retry after 2.984182925s: waiting for machine to come up
	I1004 04:24:23.873822   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:24.373911   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:24.874756   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:25.374035   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:25.873874   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.374503   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.874371   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:27.374335   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:27.873941   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:28.373861   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.550974   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:28.552007   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:28.323513   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:24:28.336556   67541 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:24:28.358371   67541 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:24:28.373163   67541 system_pods.go:59] 8 kube-system pods found
	I1004 04:24:28.373204   67541 system_pods.go:61] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:24:28.373217   67541 system_pods.go:61] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:24:28.373228   67541 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:24:28.373239   67541 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:24:28.373246   67541 system_pods.go:61] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:24:28.373256   67541 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:24:28.373267   67541 system_pods.go:61] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:24:28.373273   67541 system_pods.go:61] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:24:28.373283   67541 system_pods.go:74] duration metric: took 14.891267ms to wait for pod list to return data ...
	I1004 04:24:28.373294   67541 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:24:28.378226   67541 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:24:28.378269   67541 node_conditions.go:123] node cpu capacity is 2
	I1004 04:24:28.378285   67541 node_conditions.go:105] duration metric: took 4.985167ms to run NodePressure ...
	I1004 04:24:28.378309   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:28.649369   67541 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:24:28.654563   67541 kubeadm.go:739] kubelet initialised
	I1004 04:24:28.654584   67541 kubeadm.go:740] duration metric: took 5.188927ms waiting for restarted kubelet to initialise ...
	I1004 04:24:28.654591   67541 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:24:28.662152   67541 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.668248   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.668278   67541 pod_ready.go:82] duration metric: took 6.099746ms for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.668287   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.668294   67541 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.675790   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.675811   67541 pod_ready.go:82] duration metric: took 7.509617ms for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.675823   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.675830   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.683763   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.683811   67541 pod_ready.go:82] duration metric: took 7.972006ms for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.683830   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.683839   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.761974   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.762006   67541 pod_ready.go:82] duration metric: took 78.154275ms for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.762021   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.762030   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.162590   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-proxy-4nnld" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.162623   67541 pod_ready.go:82] duration metric: took 400.583388ms for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.162634   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-proxy-4nnld" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.162643   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.562557   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.562584   67541 pod_ready.go:82] duration metric: took 399.929497ms for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.562595   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.562602   67541 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.963502   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.963528   67541 pod_ready.go:82] duration metric: took 400.919452ms for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.963539   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.963547   67541 pod_ready.go:39] duration metric: took 1.308947485s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
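
The pod_ready.go lines above wait for each system-critical pod to report the PodReady condition as True, skipping pods whose node is not yet Ready. A small illustrative check of that condition using the k8s.io/api types (the pod literal in main is made up, not one of the pods from this run):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReady returns true when the pod's PodReady condition is True, which is
// the gist of the readiness checks logged above.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println(podReady(pod)) // false, like metrics-server in the log
}
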
	I1004 04:24:29.963561   67541 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:24:29.976241   67541 ops.go:34] apiserver oom_adj: -16
	I1004 04:24:29.976268   67541 kubeadm.go:597] duration metric: took 8.831025549s to restartPrimaryControlPlane
	I1004 04:24:29.976278   67541 kubeadm.go:394] duration metric: took 8.890203906s to StartCluster
	I1004 04:24:29.976295   67541 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:29.976372   67541 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:24:29.977898   67541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:29.978168   67541 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:24:29.978222   67541 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:24:29.978306   67541 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978330   67541 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-281471"
	W1004 04:24:29.978341   67541 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:24:29.978329   67541 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978353   67541 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978369   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:29.978367   67541 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-281471"
	I1004 04:24:29.978377   67541 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-281471"
	W1004 04:24:29.978387   67541 addons.go:243] addon metrics-server should already be in state true
	I1004 04:24:29.978413   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:29.978464   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:29.978731   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978783   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.978818   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978871   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.978839   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978970   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.979903   67541 out.go:177] * Verifying Kubernetes components...
	I1004 04:24:29.981432   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:29.994332   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42631
	I1004 04:24:29.994917   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:29.995488   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:29.995503   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:29.995865   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:29.996675   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:29.999180   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
	I1004 04:24:29.999220   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I1004 04:24:29.999564   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:29.999651   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.000157   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.000182   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.000262   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.000281   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.000379   67541 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-281471"
	W1004 04:24:30.000398   67541 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:24:30.000429   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:30.000613   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.000646   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.000790   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.000812   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.001163   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.001215   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.001259   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.001307   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.016576   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34675
	I1004 04:24:30.016650   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41997
	I1004 04:24:30.016796   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44507
	I1004 04:24:30.016993   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017079   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017138   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017536   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017557   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017548   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017584   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017537   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017621   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017929   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.017931   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.017970   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.018100   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.018152   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.018559   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.018600   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.020021   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.020637   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.022016   67541 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:30.022018   67541 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:24:30.023395   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:24:30.023417   67541 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:24:30.023444   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.023489   67541 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:24:30.023506   67541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:24:30.023528   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.027678   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028005   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028129   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.028180   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028538   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.028552   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.028560   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028724   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.028750   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.028881   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.028911   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.029013   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.029055   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.029124   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.037309   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46395
	I1004 04:24:30.037846   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.038328   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.038355   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.038683   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.038850   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.040366   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.040572   67541 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:24:30.040586   67541 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:24:30.040602   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.043618   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.044070   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.044092   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.044232   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.044413   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.044541   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.044687   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.194435   67541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:30.223577   67541 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-281471" to be "Ready" ...
	I1004 04:24:30.277458   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:24:30.316201   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:24:30.316227   67541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:24:30.333635   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:24:30.346511   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:24:30.346549   67541 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:24:30.405197   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:24:30.405219   67541 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:24:30.465174   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:24:31.307064   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307137   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307430   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.307442   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.307469   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.307546   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307574   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307691   67541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030198983s)
	I1004 04:24:31.307733   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307747   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307789   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.307811   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.309264   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.309275   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.309281   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.309291   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.309299   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.309538   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.309568   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.309583   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.315635   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.315653   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.315917   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.315933   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.411630   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.411658   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.411934   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.411951   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.411965   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.411983   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.411997   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.412221   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.412261   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.412274   67541 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-281471"
	I1004 04:24:31.412283   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.414267   67541 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1004 04:24:31.415607   67541 addons.go:510] duration metric: took 1.43738386s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
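Aside (illustrative, not part of the captured log): the addon enable/verify sequence above is performed automatically during start for this profile; the same state could be inspected by hand with the minikube CLI and kubectl, assuming the usual minikube-managed kubectl context named after the profile:

	minikube -p default-k8s-diff-port-281471 addons list
	kubectl --context default-k8s-diff-port-281471 -n kube-system get deploy metrics-server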
	I1004 04:24:32.227563   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:30.095611   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:30.096032   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:30.096061   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:30.095981   68422 retry.go:31] will retry after 2.833386023s: waiting for machine to come up
	I1004 04:24:32.933027   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.933509   66293 main.go:141] libmachine: (no-preload-658545) Found IP for machine: 192.168.72.54
	I1004 04:24:32.933538   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has current primary IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.933544   66293 main.go:141] libmachine: (no-preload-658545) Reserving static IP address...
	I1004 04:24:32.933950   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "no-preload-658545", mac: "52:54:00:f5:6c:11", ip: "192.168.72.54"} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:32.933970   66293 main.go:141] libmachine: (no-preload-658545) Reserved static IP address: 192.168.72.54
	I1004 04:24:32.933988   66293 main.go:141] libmachine: (no-preload-658545) DBG | skip adding static IP to network mk-no-preload-658545 - found existing host DHCP lease matching {name: "no-preload-658545", mac: "52:54:00:f5:6c:11", ip: "192.168.72.54"}
	I1004 04:24:32.934002   66293 main.go:141] libmachine: (no-preload-658545) DBG | Getting to WaitForSSH function...
	I1004 04:24:32.934016   66293 main.go:141] libmachine: (no-preload-658545) Waiting for SSH to be available...
	I1004 04:24:32.936089   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.936440   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:32.936471   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.936572   66293 main.go:141] libmachine: (no-preload-658545) DBG | Using SSH client type: external
	I1004 04:24:32.936599   66293 main.go:141] libmachine: (no-preload-658545) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa (-rw-------)
	I1004 04:24:32.936637   66293 main.go:141] libmachine: (no-preload-658545) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:24:32.936650   66293 main.go:141] libmachine: (no-preload-658545) DBG | About to run SSH command:
	I1004 04:24:32.936661   66293 main.go:141] libmachine: (no-preload-658545) DBG | exit 0
	I1004 04:24:33.064432   66293 main.go:141] libmachine: (no-preload-658545) DBG | SSH cmd err, output: <nil>: 
	I1004 04:24:33.064791   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetConfigRaw
	I1004 04:24:33.065494   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:33.068038   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.068302   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.068325   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.068580   66293 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/config.json ...
	I1004 04:24:33.068837   66293 machine.go:93] provisionDockerMachine start ...
	I1004 04:24:33.068858   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:33.069072   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.071425   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.071748   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.071819   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.071946   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.072166   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.072332   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.072429   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.072587   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.072799   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.072814   66293 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:24:33.184623   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:24:33.184656   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.184912   66293 buildroot.go:166] provisioning hostname "no-preload-658545"
	I1004 04:24:33.184946   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.185126   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.188804   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.189189   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.189222   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.189419   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.189664   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.189839   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.190002   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.190128   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.190300   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.190313   66293 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-658545 && echo "no-preload-658545" | sudo tee /etc/hostname
	I1004 04:24:33.316349   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-658545
	
	I1004 04:24:33.316381   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.319460   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.319908   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.319945   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.320110   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.320301   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.320475   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.320628   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.320811   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.321031   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.321058   66293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-658545' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-658545/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-658545' | sudo tee -a /etc/hosts; 
				fi
			fi
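Aside (sketch, not captured in the log): the shell snippet above only pins the freshly set hostname to 127.0.1.1, so after it runs /etc/hosts on the guest is expected to contain a line equivalent to:

	127.0.1.1 no-preload-658545

which could be confirmed over SSH with: grep no-preload-658545 /etc/hosts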
	I1004 04:24:28.874265   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:29.374364   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:29.874581   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:30.373909   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:30.874089   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.374708   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.874696   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:32.374061   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:32.874233   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:33.374290   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.050105   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:33.549870   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:33.444185   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:24:33.444221   66293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:24:33.444246   66293 buildroot.go:174] setting up certificates
	I1004 04:24:33.444257   66293 provision.go:84] configureAuth start
	I1004 04:24:33.444273   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.444569   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:33.447726   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.448137   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.448168   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.448332   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.450903   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.451311   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.451340   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.451479   66293 provision.go:143] copyHostCerts
	I1004 04:24:33.451559   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:24:33.451571   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:24:33.451638   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:24:33.451748   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:24:33.451763   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:24:33.451818   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:24:33.451897   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:24:33.451906   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:24:33.451931   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:24:33.451992   66293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.no-preload-658545 san=[127.0.0.1 192.168.72.54 localhost minikube no-preload-658545]
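Aside (illustrative command, not run by the test): the SAN list baked into the server certificate generated here could be inspected on the build host with openssl, using the path logged above:

	openssl x509 -noout -text -in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'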
	I1004 04:24:33.577106   66293 provision.go:177] copyRemoteCerts
	I1004 04:24:33.577160   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:24:33.577183   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.579990   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.580330   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.580359   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.580496   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.580672   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.580810   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.580937   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:33.671123   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:24:33.697805   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1004 04:24:33.725408   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:24:33.751285   66293 provision.go:87] duration metric: took 307.010531ms to configureAuth
	I1004 04:24:33.751315   66293 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:24:33.751553   66293 config.go:182] Loaded profile config "no-preload-658545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:33.751651   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.754476   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.754896   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.754938   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.755087   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.755282   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.755450   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.755592   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.755723   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.755969   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.755987   66293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:24:33.996596   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:24:33.996625   66293 machine.go:96] duration metric: took 927.772762ms to provisionDockerMachine
	I1004 04:24:33.996636   66293 start.go:293] postStartSetup for "no-preload-658545" (driver="kvm2")
	I1004 04:24:33.996645   66293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:24:33.996662   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:33.996958   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:24:33.996981   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.999632   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.000082   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.000111   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.000324   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.000537   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.000733   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.000924   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.089338   66293 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:24:34.094278   66293 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:24:34.094303   66293 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:24:34.094377   66293 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:24:34.094468   66293 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:24:34.094597   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:24:34.105335   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:34.134191   66293 start.go:296] duration metric: took 137.541908ms for postStartSetup
	I1004 04:24:34.134243   66293 fix.go:56] duration metric: took 19.393079344s for fixHost
	I1004 04:24:34.134269   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.137227   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.137599   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.137638   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.137779   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.137978   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.138156   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.138289   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.138459   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:34.138652   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:34.138663   66293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:24:34.250671   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015874.218795126
	
	I1004 04:24:34.250699   66293 fix.go:216] guest clock: 1728015874.218795126
	I1004 04:24:34.250709   66293 fix.go:229] Guest: 2024-10-04 04:24:34.218795126 +0000 UTC Remote: 2024-10-04 04:24:34.134249208 +0000 UTC m=+355.755571497 (delta=84.545918ms)
	I1004 04:24:34.250735   66293 fix.go:200] guest clock delta is within tolerance: 84.545918ms
	I1004 04:24:34.250742   66293 start.go:83] releasing machines lock for "no-preload-658545", held for 19.509615446s
	I1004 04:24:34.250763   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.250965   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:34.254332   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.254720   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.254746   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.254982   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255550   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255745   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255843   66293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:24:34.255907   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.255973   66293 ssh_runner.go:195] Run: cat /version.json
	I1004 04:24:34.255996   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.258802   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259036   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259118   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.259143   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259309   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.259487   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.259538   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.259563   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259633   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.259752   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.259845   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.259891   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.260042   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.260180   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.362345   66293 ssh_runner.go:195] Run: systemctl --version
	I1004 04:24:34.368641   66293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:24:34.527679   66293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:24:34.534212   66293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:24:34.534291   66293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:24:34.553539   66293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:24:34.553570   66293 start.go:495] detecting cgroup driver to use...
	I1004 04:24:34.553638   66293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:24:34.573489   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:24:34.588220   66293 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:24:34.588281   66293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:24:34.606014   66293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:24:34.621246   66293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:24:34.749423   66293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:24:34.915880   66293 docker.go:233] disabling docker service ...
	I1004 04:24:34.915960   66293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:24:34.936625   66293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:24:34.951534   66293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:24:35.089398   66293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:24:35.225269   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:24:35.241006   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:24:35.261586   66293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:24:35.261651   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.273501   66293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:24:35.273571   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.285392   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.296475   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.307774   66293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:24:35.319241   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.330361   66293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.349013   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
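Aside (sketch of the expected end state, not a dump from the VM): taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with lines equivalent to the following, alongside whatever else the drop-in file already contains:

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]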
	I1004 04:24:35.360603   66293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:24:35.371516   66293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:24:35.371581   66293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:24:35.387209   66293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
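Aside (illustrative spot-check, not run by the test): loading br_netfilter and writing 1 to /proc/sys/net/ipv4/ip_forward are the standard prerequisites for bridged pod traffic to traverse iptables; on the guest they could be confirmed with:

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward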
	I1004 04:24:35.398144   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:35.528196   66293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:24:35.629120   66293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:24:35.629198   66293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:24:35.634243   66293 start.go:563] Will wait 60s for crictl version
	I1004 04:24:35.634307   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:35.638372   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:24:35.678659   66293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:24:35.678763   66293 ssh_runner.go:195] Run: crio --version
	I1004 04:24:35.715285   66293 ssh_runner.go:195] Run: crio --version
	I1004 04:24:35.751571   66293 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:24:34.228500   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:36.727080   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:37.228706   67541 node_ready.go:49] node "default-k8s-diff-port-281471" has status "Ready":"True"
	I1004 04:24:37.228745   67541 node_ready.go:38] duration metric: took 7.005123712s for node "default-k8s-diff-port-281471" to be "Ready" ...
	I1004 04:24:37.228760   67541 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:24:37.235256   67541 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:35.752737   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:35.755375   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:35.755763   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:35.755818   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:35.756063   66293 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1004 04:24:35.760601   66293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:35.773870   66293 kubeadm.go:883] updating cluster {Name:no-preload-658545 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:24:35.773970   66293 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:24:35.774001   66293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:35.813619   66293 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:24:35.813650   66293 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 04:24:35.813736   66293 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:35.813756   66293 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.813785   66293 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1004 04:24:35.813796   66293 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.813877   66293 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:35.813740   66293 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.813758   66293 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.813771   66293 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.815271   66293 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:35.815277   66293 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1004 04:24:35.815292   66293 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.815271   66293 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.815276   66293 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:35.815353   66293 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.815358   66293 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.815402   66293 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.956470   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.963066   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.965110   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.970080   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.972477   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.988253   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.013802   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1004 04:24:36.063322   66293 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1004 04:24:36.063364   66293 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.063405   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214786   66293 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1004 04:24:36.214827   66293 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.214867   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214928   66293 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1004 04:24:36.214961   66293 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1004 04:24:36.214995   66293 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.215023   66293 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1004 04:24:36.215043   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214965   66293 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.215081   66293 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1004 04:24:36.215047   66293 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.215100   66293 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.215110   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.215139   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.215147   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.274103   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.274103   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.274185   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.274292   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.274329   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.274343   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.392523   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.405236   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.405257   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.408799   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.408857   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.408860   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.511001   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.568598   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.568658   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.568720   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.568929   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.569021   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.599594   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1004 04:24:36.599733   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.696242   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1004 04:24:36.696294   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1004 04:24:36.696336   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1004 04:24:36.696363   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:36.696390   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:36.696399   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:36.696401   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1004 04:24:36.696449   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1004 04:24:36.696507   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:36.696521   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:36.696508   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1004 04:24:36.696563   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.696613   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.701522   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1004 04:24:37.132809   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:33.874344   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:34.374158   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:34.873848   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:35.373944   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:35.874697   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.373831   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.874231   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:37.374723   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:37.873861   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:38.374206   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.050420   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:38.051653   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:39.242026   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:41.244977   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:39.289977   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.593422519s)
	I1004 04:24:39.290020   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1004 04:24:39.290087   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.593446646s)
	I1004 04:24:39.290114   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1004 04:24:39.290136   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:39.290158   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.593739386s)
	I1004 04:24:39.290175   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1004 04:24:39.290097   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.593563637s)
	I1004 04:24:39.290203   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.593795645s)
	I1004 04:24:39.290208   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:39.290213   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1004 04:24:39.290213   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1004 04:24:39.290265   66293 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.157417466s)
	I1004 04:24:39.290314   66293 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1004 04:24:39.290348   66293 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:39.290392   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:40.750955   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.460708297s)
	I1004 04:24:40.751065   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1004 04:24:40.751102   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:40.750969   66293 ssh_runner.go:235] Completed: which crictl: (1.460561899s)
	I1004 04:24:40.751159   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:40.751190   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:43.031349   66293 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.280136047s)
	I1004 04:24:43.031395   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.280209115s)
	I1004 04:24:43.031566   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1004 04:24:43.031493   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:43.031600   66293 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:43.031641   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:43.084191   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:38.873705   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:39.374361   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:39.874144   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.373793   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.873796   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:41.374744   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:41.874442   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:42.374561   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:42.874638   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:43.374677   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.548818   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:42.550744   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:43.742554   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:44.244427   67541 pod_ready.go:93] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.244453   67541 pod_ready.go:82] duration metric: took 7.009169057s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.244463   67541 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.250595   67541 pod_ready.go:93] pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.250617   67541 pod_ready.go:82] duration metric: took 6.147481ms for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.250625   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.256537   67541 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.256570   67541 pod_ready.go:82] duration metric: took 5.936641ms for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.256583   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.262681   67541 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.262707   67541 pod_ready.go:82] duration metric: took 6.115804ms for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.262721   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.271089   67541 pod_ready.go:93] pod "kube-proxy-4nnld" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.271124   67541 pod_ready.go:82] duration metric: took 8.394207ms for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.271138   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.640124   67541 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.640158   67541 pod_ready.go:82] duration metric: took 369.009816ms for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.640172   67541 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:46.647420   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:45.132971   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.101305613s)
	I1004 04:24:45.133043   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1004 04:24:45.133071   66293 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.048844025s)
	I1004 04:24:45.133079   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:45.133110   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1004 04:24:45.133135   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:45.133179   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:47.228047   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.094844592s)
	I1004 04:24:47.228087   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1004 04:24:47.228089   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.0949275s)
	I1004 04:24:47.228119   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1004 04:24:47.228154   66293 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:47.228214   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:43.874583   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:44.374117   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:44.874398   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.374755   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.874039   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:46.374598   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:46.874446   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:47.374384   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:47.874596   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:48.374021   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.049760   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:47.551861   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:48.647700   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:50.648288   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:52.649288   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:50.627043   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.398805191s)
	I1004 04:24:50.627085   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1004 04:24:50.627122   66293 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:50.627191   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:51.282056   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1004 04:24:51.282099   66293 cache_images.go:123] Successfully loaded all cached images
	I1004 04:24:51.282104   66293 cache_images.go:92] duration metric: took 15.468441268s to LoadCachedImages
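
The block above is the no-preload path: `sudo crictl images` turns up none of the v1.31.1 control-plane images, so each cached tarball under /var/lib/minikube/images is loaded with `sudo podman load -i`, which places it in the containers/storage that CRI-O reads. A rough Go sketch of that check-then-load loop, assuming the image references and tarball paths shown above (a simplified stand-in, not minikube's cache_images.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// haveImage reports whether the CRI runtime already knows this image ref;
// crictl inspecti exits non-zero when it does not.
func haveImage(ref string) bool {
	return exec.Command("sudo", "crictl", "inspecti", ref).Run() == nil
}

// loadTarball streams a cached image archive into the shared containers/storage
// via podman, which is what makes it visible to CRI-O without a registry pull.
func loadTarball(path string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", path, err, out)
	}
	return nil
}

func main() {
	// Paths and references copied from the log lines above.
	cached := map[string]string{
		"registry.k8s.io/kube-scheduler:v1.31.1": "/var/lib/minikube/images/kube-scheduler_v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0":          "/var/lib/minikube/images/etcd_3.5.15-0",
	}
	for ref, tar := range cached {
		if haveImage(ref) {
			continue // already in the runtime's store, nothing to transfer
		}
		if err := loadTarball(tar); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("loaded", ref)
	}
}
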
	I1004 04:24:51.282116   66293 kubeadm.go:934] updating node { 192.168.72.54 8443 v1.31.1 crio true true} ...
	I1004 04:24:51.282243   66293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-658545 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:51.282321   66293 ssh_runner.go:195] Run: crio config
	I1004 04:24:51.333133   66293 cni.go:84] Creating CNI manager for ""
	I1004 04:24:51.333162   66293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:51.333173   66293 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:51.333201   66293 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-658545 NodeName:no-preload-658545 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:24:51.333361   66293 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-658545"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:51.333419   66293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:24:51.344694   66293 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:51.344757   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:51.354990   66293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1004 04:24:51.372572   66293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:51.394129   66293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
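
The 2158-byte kubeadm.yaml.new written here is the multi-document file printed above: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one stream. A small sketch that walks such a file and prints each document's kind, using gopkg.in/yaml.v3 and the path from this log line (for inspection only; kubeadm parses these documents with its own typed API):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents in the stream
			}
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		// Every kubeadm/kubelet/kube-proxy document carries kind and apiVersion.
		fmt.Printf("%v (%v)\n", doc["kind"], doc["apiVersion"])
	}
}
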
	I1004 04:24:51.412865   66293 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:51.416985   66293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:51.430835   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:51.559349   66293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:51.579093   66293 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545 for IP: 192.168.72.54
	I1004 04:24:51.579120   66293 certs.go:194] generating shared ca certs ...
	I1004 04:24:51.579140   66293 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:51.579318   66293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:51.579378   66293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:51.579391   66293 certs.go:256] generating profile certs ...
	I1004 04:24:51.579494   66293 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/client.key
	I1004 04:24:51.579588   66293 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.key.10ceac04
	I1004 04:24:51.579648   66293 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.key
	I1004 04:24:51.579808   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:51.579849   66293 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:51.579861   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:51.579891   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:51.579926   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:51.579961   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:51.580018   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:51.580871   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:51.630190   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:51.667887   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:51.715372   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:51.750063   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1004 04:24:51.776606   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 04:24:51.808943   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:51.839165   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:24:51.867862   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:51.898026   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:51.926810   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:51.955416   66293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:51.977621   66293 ssh_runner.go:195] Run: openssl version
	I1004 04:24:51.984023   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:51.997672   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.002969   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.003039   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.009473   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:52.021001   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:52.032834   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.037679   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.037742   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.044012   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:52.055377   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:52.066222   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.070747   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.070794   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.076922   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:52.087952   66293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:52.093052   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:52.099710   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:52.105841   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:52.112092   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:52.118428   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:52.125380   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
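
Each `openssl x509 ... -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now; a non-zero exit is what would trigger regeneration. The equivalent check in Go with crypto/x509, assuming the apiserver-kubelet-client path from the log (a sketch, not minikube's cert handling):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Mirror `openssl x509 -checkend 86400`: is the cert still valid 24h from now?
	deadline := time.Now().Add(24 * time.Hour)
	if deadline.After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}
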
	I1004 04:24:52.132085   66293 kubeadm.go:392] StartCluster: {Name:no-preload-658545 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:52.132193   66293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:52.132254   66293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:52.171814   66293 cri.go:89] found id: ""
	I1004 04:24:52.171882   66293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:52.182484   66293 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:52.182508   66293 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:52.182559   66293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:52.193069   66293 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:52.194108   66293 kubeconfig.go:125] found "no-preload-658545" server: "https://192.168.72.54:8443"
	I1004 04:24:52.196237   66293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:52.206551   66293 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I1004 04:24:52.206584   66293 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:52.206598   66293 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:52.206657   66293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:52.249698   66293 cri.go:89] found id: ""
	I1004 04:24:52.249762   66293 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:52.266001   66293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:52.276056   66293 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:52.276081   66293 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:52.276128   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:24:52.285610   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:52.285677   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:52.295177   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:24:52.304309   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:52.304362   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:52.314126   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:24:52.323562   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:52.323618   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:52.332906   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:24:52.342199   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:52.342252   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:24:52.351661   66293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:52.361071   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:52.493171   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:48.874471   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:49.374480   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:49.874689   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.373726   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.874543   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:51.373743   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:51.874513   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:52.374719   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:52.874305   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:53.374419   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.049668   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:52.050522   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:55.147282   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:57.648169   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:53.586422   66293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.093219868s)
	I1004 04:24:53.586448   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:53.794085   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:53.872327   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:54.004418   66293 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:54.004510   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.505463   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.004602   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.036834   66293 api_server.go:72] duration metric: took 1.032414365s to wait for apiserver process to appear ...
	I1004 04:24:55.036858   66293 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:24:55.036877   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:55.037325   66293 api_server.go:269] stopped: https://192.168.72.54:8443/healthz: Get "https://192.168.72.54:8443/healthz": dial tcp 192.168.72.54:8443: connect: connection refused
	I1004 04:24:55.537513   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:57.951637   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:57.951663   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:57.951676   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.010162   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:58.010188   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:58.037484   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.060069   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:58.060161   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:53.874725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.373903   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.874127   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.374051   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.874019   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:56.373828   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:56.874027   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:57.373914   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:57.874598   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:58.374106   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.550080   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:56.550541   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:59.051837   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:58.536932   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.541611   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:58.541634   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:59.037723   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:59.057378   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:59.057411   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:59.536994   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:59.545827   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 200:
	ok
	I1004 04:24:59.554199   66293 api_server.go:141] control plane version: v1.31.1
	I1004 04:24:59.554238   66293 api_server.go:131] duration metric: took 4.517373336s to wait for apiserver health ...
	I1004 04:24:59.554247   66293 cni.go:84] Creating CNI manager for ""
	I1004 04:24:59.554253   66293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:59.555912   66293 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:24:59.557009   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:24:59.590146   66293 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:24:59.610903   66293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:24:59.634067   66293 system_pods.go:59] 8 kube-system pods found
	I1004 04:24:59.634109   66293 system_pods.go:61] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:24:59.634121   66293 system_pods.go:61] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:24:59.634131   66293 system_pods.go:61] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:24:59.634143   66293 system_pods.go:61] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:24:59.634151   66293 system_pods.go:61] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 04:24:59.634160   66293 system_pods.go:61] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:24:59.634168   66293 system_pods.go:61] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:24:59.634181   66293 system_pods.go:61] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 04:24:59.634189   66293 system_pods.go:74] duration metric: took 23.257716ms to wait for pod list to return data ...
	I1004 04:24:59.634198   66293 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:24:59.638128   66293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:24:59.638160   66293 node_conditions.go:123] node cpu capacity is 2
	I1004 04:24:59.638173   66293 node_conditions.go:105] duration metric: took 3.969841ms to run NodePressure ...
	I1004 04:24:59.638191   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:59.968829   66293 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:24:59.975495   66293 kubeadm.go:739] kubelet initialised
	I1004 04:24:59.975516   66293 kubeadm.go:740] duration metric: took 6.660196ms waiting for restarted kubelet to initialise ...
	I1004 04:24:59.975522   66293 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:25:00.084084   66293 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.113474   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.113498   66293 pod_ready.go:82] duration metric: took 29.379607ms for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.113507   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.113513   66293 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.128436   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "etcd-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.128463   66293 pod_ready.go:82] duration metric: took 14.94278ms for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.128475   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "etcd-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.128485   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.140033   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-apiserver-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.140059   66293 pod_ready.go:82] duration metric: took 11.56545ms for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.140068   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-apiserver-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.140077   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.157254   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.157286   66293 pod_ready.go:82] duration metric: took 17.197805ms for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.157298   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.157306   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.415110   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-proxy-dvr6b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.415141   66293 pod_ready.go:82] duration metric: took 257.824162ms for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.415151   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-proxy-dvr6b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.415157   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.815201   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-scheduler-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.815226   66293 pod_ready.go:82] duration metric: took 400.063468ms for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.815235   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-scheduler-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.815241   66293 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:01.214416   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:01.214448   66293 pod_ready.go:82] duration metric: took 399.197779ms for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:01.214461   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:01.214468   66293 pod_ready.go:39] duration metric: took 1.238937842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:25:01.214484   66293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:25:01.227389   66293 ops.go:34] apiserver oom_adj: -16
	I1004 04:25:01.227414   66293 kubeadm.go:597] duration metric: took 9.044898439s to restartPrimaryControlPlane
	I1004 04:25:01.227424   66293 kubeadm.go:394] duration metric: took 9.095346513s to StartCluster
	I1004 04:25:01.227441   66293 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:25:01.227520   66293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:25:01.229057   66293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:25:01.229318   66293 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:25:01.229389   66293 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:25:01.229496   66293 addons.go:69] Setting storage-provisioner=true in profile "no-preload-658545"
	I1004 04:25:01.229505   66293 addons.go:69] Setting default-storageclass=true in profile "no-preload-658545"
	I1004 04:25:01.229512   66293 addons.go:234] Setting addon storage-provisioner=true in "no-preload-658545"
	W1004 04:25:01.229520   66293 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:25:01.229524   66293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-658545"
	I1004 04:25:01.229558   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.229562   66293 config.go:182] Loaded profile config "no-preload-658545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:25:01.229557   66293 addons.go:69] Setting metrics-server=true in profile "no-preload-658545"
	I1004 04:25:01.229607   66293 addons.go:234] Setting addon metrics-server=true in "no-preload-658545"
	W1004 04:25:01.229621   66293 addons.go:243] addon metrics-server should already be in state true
	I1004 04:25:01.229655   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.229968   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.229987   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.229971   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.230013   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.230030   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.230133   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.231051   66293 out.go:177] * Verifying Kubernetes components...
	I1004 04:25:01.232578   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:25:01.256283   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38313
	I1004 04:25:01.256939   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.257689   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.257720   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.258124   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.258358   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.262593   66293 addons.go:234] Setting addon default-storageclass=true in "no-preload-658545"
	W1004 04:25:01.262620   66293 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:25:01.262652   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.263036   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.263117   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.274653   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I1004 04:25:01.275130   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.275655   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.275685   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.276062   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.276652   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.276697   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.277272   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I1004 04:25:01.277756   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.278175   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.278191   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.278548   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.279116   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.279163   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.283719   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1004 04:25:01.284316   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.284814   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.284836   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.285180   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.285751   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.285801   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.297682   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I1004 04:25:01.297859   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I1004 04:25:01.298298   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.298418   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.298975   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.298995   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.299058   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.299077   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.299407   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.299470   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.299618   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.299660   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.301552   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.302048   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.303197   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I1004 04:25:01.303600   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.304053   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.304068   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.304124   66293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:25:01.304234   66293 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:25:01.304403   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.304571   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.305715   66293 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:25:01.305735   66293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:25:01.305850   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:25:01.305861   66293 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:25:01.305876   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.305752   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.306101   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.306321   66293 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:25:01.306334   66293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:25:01.306349   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.310374   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.310752   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.310776   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.310888   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.311057   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.311192   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.311272   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.311338   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.311603   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312049   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.312072   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312175   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.312201   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312302   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.312468   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.312497   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.312586   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.312658   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.312681   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.312811   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.312948   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.478533   66293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:25:01.511716   66293 node_ready.go:35] waiting up to 6m0s for node "no-preload-658545" to be "Ready" ...
	I1004 04:25:01.557879   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:25:01.574381   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:25:01.601090   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:25:01.601112   66293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:25:01.630465   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:25:01.630495   66293 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:25:01.681089   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:25:01.681118   66293 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:25:01.703024   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:25:02.053562   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.053585   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.053855   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.053871   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.053882   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.053891   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.054118   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.054139   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.054128   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.061624   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.061646   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.061949   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.061967   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.061985   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.580950   66293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.00653263s)
	I1004 04:25:02.581002   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.581014   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.581350   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.581368   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.581376   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.581382   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.581459   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.581594   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.581606   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.702713   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.702739   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.703015   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.703028   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.703090   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.703106   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.703117   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.703347   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.703363   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.703380   66293 addons.go:475] Verifying addon metrics-server=true in "no-preload-658545"
	I1004 04:25:02.705335   66293 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1004 04:24:59.648241   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:01.649424   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:02.706605   66293 addons.go:510] duration metric: took 1.477226s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1004 04:24:58.874143   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:59.373810   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:59.874682   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:00.374672   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:00.873725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.374175   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.874724   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:02.374725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:02.874746   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:03.373689   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.548783   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:03.549515   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:04.146633   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:06.147540   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.147626   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:03.516566   66293 node_ready.go:53] node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:06.022815   66293 node_ready.go:53] node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:03.874594   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:04.374498   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:04.874377   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:05.374050   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:05.374139   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:05.412153   67282 cri.go:89] found id: ""
	I1004 04:25:05.412185   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.412195   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:05.412202   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:05.412264   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:05.446725   67282 cri.go:89] found id: ""
	I1004 04:25:05.446750   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.446758   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:05.446763   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:05.446816   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:05.487652   67282 cri.go:89] found id: ""
	I1004 04:25:05.487678   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.487686   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:05.487691   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:05.487752   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:05.526275   67282 cri.go:89] found id: ""
	I1004 04:25:05.526302   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.526310   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:05.526319   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:05.526375   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:05.565004   67282 cri.go:89] found id: ""
	I1004 04:25:05.565034   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.565045   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:05.565052   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:05.565101   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:05.601963   67282 cri.go:89] found id: ""
	I1004 04:25:05.601990   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.601998   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:05.602003   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:05.602051   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:05.638621   67282 cri.go:89] found id: ""
	I1004 04:25:05.638651   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.638660   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:05.638666   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:05.638720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:05.678042   67282 cri.go:89] found id: ""
	I1004 04:25:05.678071   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.678082   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:05.678093   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:05.678107   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:05.720677   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:05.720707   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:05.775219   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:05.775252   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:05.789748   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:05.789774   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:05.918752   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:05.918783   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:05.918798   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:08.493206   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:05.551035   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.048870   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:10.148154   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.645708   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.516666   66293 node_ready.go:49] node "no-preload-658545" has status "Ready":"True"
	I1004 04:25:08.516690   66293 node_ready.go:38] duration metric: took 7.004939371s for node "no-preload-658545" to be "Ready" ...
	I1004 04:25:08.516699   66293 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:25:08.522101   66293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.527132   66293 pod_ready.go:93] pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:08.527153   66293 pod_ready.go:82] duration metric: took 5.024648ms for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.527162   66293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.534172   66293 pod_ready.go:93] pod "etcd-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:08.534195   66293 pod_ready.go:82] duration metric: took 7.027189ms for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.534204   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:10.541186   66293 pod_ready.go:103] pod "kube-apiserver-no-preload-658545" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.040607   66293 pod_ready.go:93] pod "kube-apiserver-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.040640   66293 pod_ready.go:82] duration metric: took 3.506428875s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.040654   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.045845   66293 pod_ready.go:93] pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.045870   66293 pod_ready.go:82] duration metric: took 5.207108ms for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.045883   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.051587   66293 pod_ready.go:93] pod "kube-proxy-dvr6b" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.051604   66293 pod_ready.go:82] duration metric: took 5.715328ms for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.051613   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.116361   66293 pod_ready.go:93] pod "kube-scheduler-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.116401   66293 pod_ready.go:82] duration metric: took 64.774234ms for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.116411   66293 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.506490   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:08.506549   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:08.545875   67282 cri.go:89] found id: ""
	I1004 04:25:08.545909   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.545920   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:08.545933   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:08.545997   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:08.582348   67282 cri.go:89] found id: ""
	I1004 04:25:08.582375   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.582383   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:08.582389   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:08.582438   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:08.637763   67282 cri.go:89] found id: ""
	I1004 04:25:08.637797   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.637809   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:08.637816   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:08.637890   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:08.681171   67282 cri.go:89] found id: ""
	I1004 04:25:08.681205   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.681216   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:08.681224   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:08.681289   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:08.719513   67282 cri.go:89] found id: ""
	I1004 04:25:08.719542   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.719549   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:08.719555   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:08.719607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:08.762152   67282 cri.go:89] found id: ""
	I1004 04:25:08.762175   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.762183   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:08.762188   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:08.762251   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:08.799857   67282 cri.go:89] found id: ""
	I1004 04:25:08.799881   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.799892   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:08.799903   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:08.799954   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:08.835264   67282 cri.go:89] found id: ""
	I1004 04:25:08.835296   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.835308   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:08.835318   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:08.835330   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:08.875501   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:08.875532   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:08.929145   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:08.929178   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:08.942769   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:08.942808   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:09.025372   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:09.025401   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:09.025416   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:11.611179   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:11.625118   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:11.625253   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:11.661512   67282 cri.go:89] found id: ""
	I1004 04:25:11.661540   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.661547   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:11.661553   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:11.661607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:11.704902   67282 cri.go:89] found id: ""
	I1004 04:25:11.704931   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.704941   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:11.704948   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:11.705007   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:11.741747   67282 cri.go:89] found id: ""
	I1004 04:25:11.741770   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.741780   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:11.741787   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:11.741841   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:11.776838   67282 cri.go:89] found id: ""
	I1004 04:25:11.776863   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.776871   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:11.776876   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:11.776927   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:11.812996   67282 cri.go:89] found id: ""
	I1004 04:25:11.813024   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.813033   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:11.813038   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:11.813097   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:11.853718   67282 cri.go:89] found id: ""
	I1004 04:25:11.853744   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.853752   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:11.853758   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:11.853813   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:11.896840   67282 cri.go:89] found id: ""
	I1004 04:25:11.896867   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.896879   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:11.896885   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:11.896943   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:11.932529   67282 cri.go:89] found id: ""
	I1004 04:25:11.932552   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.932561   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:11.932569   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:11.932580   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:11.946504   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:11.946538   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:12.024692   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:12.024713   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:12.024724   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:12.111942   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:12.111976   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:12.156483   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:12.156522   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:10.049912   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.051024   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.646058   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:16.647214   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.123343   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:16.622947   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.708243   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:14.722943   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:14.723007   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:14.758502   67282 cri.go:89] found id: ""
	I1004 04:25:14.758555   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.758567   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:14.758575   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:14.758633   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:14.796496   67282 cri.go:89] found id: ""
	I1004 04:25:14.796525   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.796532   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:14.796538   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:14.796595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:14.832216   67282 cri.go:89] found id: ""
	I1004 04:25:14.832247   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.832259   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:14.832266   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:14.832330   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:14.868461   67282 cri.go:89] found id: ""
	I1004 04:25:14.868491   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.868501   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:14.868509   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:14.868568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:14.909827   67282 cri.go:89] found id: ""
	I1004 04:25:14.909857   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.909867   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:14.909875   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:14.909949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:14.947809   67282 cri.go:89] found id: ""
	I1004 04:25:14.947839   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.947850   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:14.947857   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:14.947904   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:14.984073   67282 cri.go:89] found id: ""
	I1004 04:25:14.984101   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.984110   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:14.984115   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:14.984170   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:15.021145   67282 cri.go:89] found id: ""
	I1004 04:25:15.021179   67282 logs.go:282] 0 containers: []
	W1004 04:25:15.021191   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:15.021204   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:15.021217   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:15.075295   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:15.075328   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:15.088953   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:15.088980   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:15.175103   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:15.175128   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:15.175143   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:15.259004   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:15.259044   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:17.825029   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:17.839496   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:17.839574   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:17.877643   67282 cri.go:89] found id: ""
	I1004 04:25:17.877673   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.877684   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:17.877692   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:17.877751   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:17.921534   67282 cri.go:89] found id: ""
	I1004 04:25:17.921563   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.921574   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:17.921581   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:17.921634   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:17.961281   67282 cri.go:89] found id: ""
	I1004 04:25:17.961307   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.961315   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:17.961320   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:17.961386   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:18.001036   67282 cri.go:89] found id: ""
	I1004 04:25:18.001066   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.001078   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:18.001085   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:18.001156   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:18.043212   67282 cri.go:89] found id: ""
	I1004 04:25:18.043241   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.043252   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:18.043259   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:18.043319   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:18.082399   67282 cri.go:89] found id: ""
	I1004 04:25:18.082423   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.082430   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:18.082435   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:18.082493   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:18.120507   67282 cri.go:89] found id: ""
	I1004 04:25:18.120534   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.120544   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:18.120550   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:18.120605   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:18.156601   67282 cri.go:89] found id: ""
	I1004 04:25:18.156629   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.156640   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:18.156650   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:18.156663   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:18.198393   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:18.198424   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:18.250992   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:18.251032   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:18.267984   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:18.268015   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:18.343283   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:18.343303   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:18.343314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:14.549511   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:17.048940   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:19.051125   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:18.648462   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:21.146813   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.147244   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:18.624165   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:20.627159   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.123629   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
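Interleaved with this apiserver wait loop, three other start-stop profiles (processes 66755, 67541 and 66293) are each stuck waiting for their metrics-server pod to report Ready. A hedged sketch of how that condition could be inspected by hand; the pod names come from the log, while <profile> is a placeholder for the corresponding minikube context, which is not shown in this excerpt:

	kubectl --context <profile> -n kube-system describe pod metrics-server-6867b74b74-zsf86
	kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-zsf86 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'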
	I1004 04:25:20.922578   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:20.938037   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:20.938122   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:20.978389   67282 cri.go:89] found id: ""
	I1004 04:25:20.978417   67282 logs.go:282] 0 containers: []
	W1004 04:25:20.978426   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:20.978431   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:20.978478   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:21.033490   67282 cri.go:89] found id: ""
	I1004 04:25:21.033520   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.033528   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:21.033533   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:21.033589   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:21.087168   67282 cri.go:89] found id: ""
	I1004 04:25:21.087198   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.087209   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:21.087216   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:21.087299   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:21.144327   67282 cri.go:89] found id: ""
	I1004 04:25:21.144356   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.144366   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:21.144373   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:21.144431   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:21.183336   67282 cri.go:89] found id: ""
	I1004 04:25:21.183378   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.183390   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:21.183397   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:21.183459   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:21.221847   67282 cri.go:89] found id: ""
	I1004 04:25:21.221878   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.221892   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:21.221901   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:21.221961   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:21.258542   67282 cri.go:89] found id: ""
	I1004 04:25:21.258573   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.258584   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:21.258590   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:21.258652   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:21.303173   67282 cri.go:89] found id: ""
	I1004 04:25:21.303202   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.303211   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:21.303218   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:21.303243   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:21.358109   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:21.358146   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:21.373958   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:21.373987   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:21.450956   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:21.450980   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:21.451006   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:21.534763   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:21.534807   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:21.550109   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.550304   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:25.148868   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:27.647698   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:25.622123   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:27.624777   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:24.082856   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:24.098263   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:24.098336   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:24.144969   67282 cri.go:89] found id: ""
	I1004 04:25:24.144999   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.145009   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:24.145015   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:24.145072   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:24.185670   67282 cri.go:89] found id: ""
	I1004 04:25:24.185693   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.185702   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:24.185708   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:24.185769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:24.223657   67282 cri.go:89] found id: ""
	I1004 04:25:24.223691   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.223703   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:24.223710   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:24.223769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:24.261841   67282 cri.go:89] found id: ""
	I1004 04:25:24.261864   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.261872   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:24.261878   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:24.261938   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:24.299734   67282 cri.go:89] found id: ""
	I1004 04:25:24.299758   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.299769   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:24.299775   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:24.299867   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:24.337413   67282 cri.go:89] found id: ""
	I1004 04:25:24.337440   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.337450   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:24.337457   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:24.337523   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:24.375963   67282 cri.go:89] found id: ""
	I1004 04:25:24.375995   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.376007   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:24.376014   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:24.376073   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:24.415978   67282 cri.go:89] found id: ""
	I1004 04:25:24.416010   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.416021   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:24.416030   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:24.416045   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:24.458703   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:24.458738   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:24.510669   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:24.510704   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:24.525646   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:24.525687   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:24.603280   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:24.603310   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:24.603324   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:27.184935   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:27.200241   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:27.200321   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:27.237546   67282 cri.go:89] found id: ""
	I1004 04:25:27.237576   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.237588   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:27.237596   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:27.237653   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:27.272598   67282 cri.go:89] found id: ""
	I1004 04:25:27.272625   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.272634   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:27.272642   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:27.272700   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:27.306659   67282 cri.go:89] found id: ""
	I1004 04:25:27.306693   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.306706   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:27.306715   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:27.306779   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:27.344315   67282 cri.go:89] found id: ""
	I1004 04:25:27.344349   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.344363   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:27.344370   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:27.344428   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:27.380231   67282 cri.go:89] found id: ""
	I1004 04:25:27.380267   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.380278   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:27.380286   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:27.380346   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:27.418137   67282 cri.go:89] found id: ""
	I1004 04:25:27.418161   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.418169   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:27.418174   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:27.418225   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:27.458235   67282 cri.go:89] found id: ""
	I1004 04:25:27.458262   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.458283   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:27.458289   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:27.458342   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:27.495161   67282 cri.go:89] found id: ""
	I1004 04:25:27.495189   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.495198   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:27.495206   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:27.495217   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:27.547749   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:27.547795   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:27.563322   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:27.563355   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:27.636682   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:27.636710   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:27.636725   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:27.711316   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:27.711354   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:26.050001   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:28.548322   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.147210   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:32.647027   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.122267   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:32.122501   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.250361   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:30.265789   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:30.265866   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:30.305127   67282 cri.go:89] found id: ""
	I1004 04:25:30.305166   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.305183   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:30.305190   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:30.305258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:30.346529   67282 cri.go:89] found id: ""
	I1004 04:25:30.346560   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.346570   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:30.346577   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:30.346641   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:30.387368   67282 cri.go:89] found id: ""
	I1004 04:25:30.387407   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.387418   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:30.387425   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:30.387489   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:30.428193   67282 cri.go:89] found id: ""
	I1004 04:25:30.428230   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.428242   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:30.428248   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:30.428308   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:30.465484   67282 cri.go:89] found id: ""
	I1004 04:25:30.465509   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.465518   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:30.465523   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:30.465573   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:30.501133   67282 cri.go:89] found id: ""
	I1004 04:25:30.501163   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.501174   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:30.501181   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:30.501248   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:30.536492   67282 cri.go:89] found id: ""
	I1004 04:25:30.536519   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.536530   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:30.536536   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:30.536587   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:30.571721   67282 cri.go:89] found id: ""
	I1004 04:25:30.571745   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.571753   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:30.571761   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:30.571771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:30.626922   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:30.626958   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:30.641817   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:30.641852   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:30.725604   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:30.725633   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:30.725647   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:30.800359   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:30.800393   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:33.340747   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:33.355862   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:33.355936   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:33.397628   67282 cri.go:89] found id: ""
	I1004 04:25:33.397655   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.397662   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:33.397668   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:33.397718   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:33.442100   67282 cri.go:89] found id: ""
	I1004 04:25:33.442128   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.442137   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:33.442142   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:33.442187   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:33.481035   67282 cri.go:89] found id: ""
	I1004 04:25:33.481063   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.481076   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:33.481083   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:33.481149   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:30.549816   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:33.048791   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:35.147125   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:37.647224   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:34.122573   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:36.622639   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:33.516633   67282 cri.go:89] found id: ""
	I1004 04:25:33.516661   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.516669   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:33.516677   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:33.516727   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:33.556569   67282 cri.go:89] found id: ""
	I1004 04:25:33.556600   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.556610   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:33.556617   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:33.556679   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:33.591678   67282 cri.go:89] found id: ""
	I1004 04:25:33.591715   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.591724   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:33.591731   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:33.591786   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:33.626571   67282 cri.go:89] found id: ""
	I1004 04:25:33.626594   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.626602   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:33.626607   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:33.626650   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:33.664336   67282 cri.go:89] found id: ""
	I1004 04:25:33.664359   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.664367   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:33.664375   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:33.664386   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:33.748013   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:33.748047   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:33.786730   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:33.786767   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:33.839355   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:33.839392   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:33.853807   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:33.853835   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:33.920183   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
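Since crictl reports no control-plane containers at all, the next place to look would be the kubelet and the static pod manifests it is supposed to launch. A brief follow-up sketch, assuming the standard kubeadm layout under /etc/kubernetes/manifests that minikube uses; these checks are a suggestion and are not run by the test harness itself:

	sudo systemctl status kubelet --no-pager    # is the kubelet running and healthy?
	ls /etc/kubernetes/manifests/               # kube-apiserver.yaml, etcd.yaml, ... should be present
	sudo crictl pods                            # were any sandboxes created for the static pods?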
	I1004 04:25:36.420485   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:36.435150   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:36.435221   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:36.471818   67282 cri.go:89] found id: ""
	I1004 04:25:36.471842   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.471850   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:36.471855   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:36.471908   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:36.511469   67282 cri.go:89] found id: ""
	I1004 04:25:36.511496   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.511504   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:36.511509   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:36.511557   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:36.552607   67282 cri.go:89] found id: ""
	I1004 04:25:36.552633   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.552641   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:36.552646   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:36.552702   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:36.596260   67282 cri.go:89] found id: ""
	I1004 04:25:36.596282   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.596290   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:36.596295   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:36.596340   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:36.636674   67282 cri.go:89] found id: ""
	I1004 04:25:36.636700   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.636708   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:36.636713   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:36.636764   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:36.675155   67282 cri.go:89] found id: ""
	I1004 04:25:36.675194   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.675206   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:36.675214   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:36.675279   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:36.713458   67282 cri.go:89] found id: ""
	I1004 04:25:36.713485   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.713493   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:36.713498   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:36.713552   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:36.754567   67282 cri.go:89] found id: ""
	I1004 04:25:36.754596   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.754607   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:36.754618   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:36.754631   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:36.824413   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:36.824439   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:36.824453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:36.900438   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:36.900471   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:36.942238   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:36.942264   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:36.992527   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:36.992556   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:35.050546   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:37.548965   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:39.647505   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:42.146720   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:38.623559   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:41.121785   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:43.122437   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:39.506599   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:39.520782   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:39.520854   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:39.561853   67282 cri.go:89] found id: ""
	I1004 04:25:39.561880   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.561891   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:39.561898   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:39.561955   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:39.597548   67282 cri.go:89] found id: ""
	I1004 04:25:39.597581   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.597591   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:39.597598   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:39.597659   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:39.634481   67282 cri.go:89] found id: ""
	I1004 04:25:39.634517   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.634525   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:39.634530   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:39.634575   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:39.677077   67282 cri.go:89] found id: ""
	I1004 04:25:39.677107   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.677117   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:39.677124   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:39.677185   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:39.716334   67282 cri.go:89] found id: ""
	I1004 04:25:39.716356   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.716364   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:39.716369   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:39.716416   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:39.754765   67282 cri.go:89] found id: ""
	I1004 04:25:39.754792   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.754803   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:39.754810   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:39.754863   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:39.788782   67282 cri.go:89] found id: ""
	I1004 04:25:39.788811   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.788824   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:39.788832   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:39.788890   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:39.821946   67282 cri.go:89] found id: ""
	I1004 04:25:39.821970   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.821979   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:39.821988   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:39.822001   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:39.892629   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:39.892657   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:39.892674   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:39.973480   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:39.973515   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:40.018175   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:40.018203   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:40.068585   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:40.068620   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:42.583639   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:42.597249   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:42.597333   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:42.631993   67282 cri.go:89] found id: ""
	I1004 04:25:42.632020   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.632030   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:42.632037   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:42.632091   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:42.669708   67282 cri.go:89] found id: ""
	I1004 04:25:42.669739   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.669749   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:42.669762   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:42.669836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:42.705995   67282 cri.go:89] found id: ""
	I1004 04:25:42.706019   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.706030   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:42.706037   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:42.706094   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:42.740436   67282 cri.go:89] found id: ""
	I1004 04:25:42.740458   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.740466   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:42.740472   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:42.740524   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:42.774516   67282 cri.go:89] found id: ""
	I1004 04:25:42.774546   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.774557   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:42.774564   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:42.774614   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:42.807471   67282 cri.go:89] found id: ""
	I1004 04:25:42.807502   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.807510   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:42.807516   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:42.807561   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:42.851943   67282 cri.go:89] found id: ""
	I1004 04:25:42.851968   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.851977   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:42.851983   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:42.852040   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:42.887762   67282 cri.go:89] found id: ""
	I1004 04:25:42.887801   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.887812   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:42.887822   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:42.887834   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:42.960398   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:42.960423   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:42.960440   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:43.040078   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:43.040117   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:43.081614   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:43.081638   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:43.132744   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:43.132781   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:39.551722   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:42.049418   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:44.049835   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:44.646919   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:47.146884   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:45.622878   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.122299   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:45.647332   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:45.660765   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:45.660834   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:45.696351   67282 cri.go:89] found id: ""
	I1004 04:25:45.696379   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.696390   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:45.696397   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:45.696449   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:45.738529   67282 cri.go:89] found id: ""
	I1004 04:25:45.738553   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.738561   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:45.738566   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:45.738621   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:45.773071   67282 cri.go:89] found id: ""
	I1004 04:25:45.773094   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.773103   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:45.773110   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:45.773165   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:45.810813   67282 cri.go:89] found id: ""
	I1004 04:25:45.810840   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.810852   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:45.810859   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:45.810913   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:45.848916   67282 cri.go:89] found id: ""
	I1004 04:25:45.848942   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.848951   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:45.848956   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:45.849014   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:45.886737   67282 cri.go:89] found id: ""
	I1004 04:25:45.886763   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.886772   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:45.886778   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:45.886825   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:45.922263   67282 cri.go:89] found id: ""
	I1004 04:25:45.922291   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.922301   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:45.922307   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:45.922364   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:45.956688   67282 cri.go:89] found id: ""
	I1004 04:25:45.956710   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.956718   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:45.956725   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:45.956737   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:46.007334   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:46.007365   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:46.020892   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:46.020916   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:46.089786   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:46.089809   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:46.089822   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:46.175987   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:46.176017   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:46.549153   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.549893   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:49.147322   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:51.647365   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:50.622540   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:52.623714   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.718354   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:48.733291   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:48.733347   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:48.769149   67282 cri.go:89] found id: ""
	I1004 04:25:48.769175   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.769185   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:48.769193   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:48.769249   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:48.804386   67282 cri.go:89] found id: ""
	I1004 04:25:48.804410   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.804418   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:48.804423   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:48.804467   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:48.841747   67282 cri.go:89] found id: ""
	I1004 04:25:48.841774   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.841782   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:48.841788   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:48.841836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:48.880025   67282 cri.go:89] found id: ""
	I1004 04:25:48.880048   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.880058   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:48.880064   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:48.880121   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:48.916506   67282 cri.go:89] found id: ""
	I1004 04:25:48.916530   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.916540   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:48.916547   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:48.916607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:48.952082   67282 cri.go:89] found id: ""
	I1004 04:25:48.952105   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.952116   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:48.952122   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:48.952177   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:48.986097   67282 cri.go:89] found id: ""
	I1004 04:25:48.986124   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.986135   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:48.986143   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:48.986210   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:49.020400   67282 cri.go:89] found id: ""
	I1004 04:25:49.020428   67282 logs.go:282] 0 containers: []
	W1004 04:25:49.020436   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:49.020445   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:49.020462   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:49.074724   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:49.074754   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:49.088504   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:49.088529   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:49.165940   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:49.165961   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:49.165972   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:49.244482   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:49.244519   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:51.786086   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:51.800644   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:51.800720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:51.839951   67282 cri.go:89] found id: ""
	I1004 04:25:51.839980   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.839990   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:51.839997   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:51.840055   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:51.878660   67282 cri.go:89] found id: ""
	I1004 04:25:51.878684   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.878695   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:51.878701   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:51.878762   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:51.916640   67282 cri.go:89] found id: ""
	I1004 04:25:51.916665   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.916672   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:51.916678   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:51.916725   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:51.953800   67282 cri.go:89] found id: ""
	I1004 04:25:51.953827   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.953835   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:51.953840   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:51.953897   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:51.993107   67282 cri.go:89] found id: ""
	I1004 04:25:51.993139   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.993150   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:51.993157   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:51.993214   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:52.027426   67282 cri.go:89] found id: ""
	I1004 04:25:52.027454   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.027464   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:52.027470   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:52.027521   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:52.063608   67282 cri.go:89] found id: ""
	I1004 04:25:52.063638   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.063650   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:52.063657   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:52.063717   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:52.100052   67282 cri.go:89] found id: ""
	I1004 04:25:52.100083   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.100094   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:52.100106   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:52.100125   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:52.113801   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:52.113827   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:52.201284   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:52.201311   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:52.201322   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:52.280014   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:52.280047   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:52.318120   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:52.318145   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:51.048719   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:53.050304   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:54.146697   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:56.147015   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:58.148736   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:55.122546   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:57.123051   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:54.872245   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:54.886914   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:54.886990   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:54.927117   67282 cri.go:89] found id: ""
	I1004 04:25:54.927144   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.927152   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:54.927157   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:54.927205   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:54.962510   67282 cri.go:89] found id: ""
	I1004 04:25:54.962540   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.962552   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:54.962559   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:54.962619   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:54.996812   67282 cri.go:89] found id: ""
	I1004 04:25:54.996839   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.996848   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:54.996854   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:54.996905   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:55.034557   67282 cri.go:89] found id: ""
	I1004 04:25:55.034587   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.034597   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:55.034605   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:55.034667   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:55.072383   67282 cri.go:89] found id: ""
	I1004 04:25:55.072416   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.072427   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:55.072434   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:55.072494   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:55.121561   67282 cri.go:89] found id: ""
	I1004 04:25:55.121588   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.121598   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:55.121604   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:55.121775   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:55.165525   67282 cri.go:89] found id: ""
	I1004 04:25:55.165553   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.165564   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:55.165570   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:55.165627   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:55.201808   67282 cri.go:89] found id: ""
	I1004 04:25:55.201836   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.201846   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:55.201857   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:55.201870   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:55.280889   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:55.280917   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:55.280932   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:55.354979   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:55.355012   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:55.397144   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:55.397174   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:55.448710   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:55.448746   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:57.963840   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:57.977027   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:57.977085   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:58.019244   67282 cri.go:89] found id: ""
	I1004 04:25:58.019273   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.019285   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:58.019293   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:58.019351   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:58.057979   67282 cri.go:89] found id: ""
	I1004 04:25:58.058008   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.058018   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:58.058027   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:58.058084   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:58.094607   67282 cri.go:89] found id: ""
	I1004 04:25:58.094639   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.094652   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:58.094658   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:58.094726   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:58.130150   67282 cri.go:89] found id: ""
	I1004 04:25:58.130177   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.130188   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:58.130196   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:58.130259   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:58.167662   67282 cri.go:89] found id: ""
	I1004 04:25:58.167691   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.167701   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:58.167709   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:58.167769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:58.203480   67282 cri.go:89] found id: ""
	I1004 04:25:58.203568   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.203585   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:58.203594   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:58.203662   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:58.239516   67282 cri.go:89] found id: ""
	I1004 04:25:58.239537   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.239545   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:58.239551   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:58.239595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:58.275525   67282 cri.go:89] found id: ""
	I1004 04:25:58.275553   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.275564   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:58.275574   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:58.275587   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:58.331191   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:58.331224   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:58.345629   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:58.345659   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:58.416297   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:58.416315   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:58.416326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:58.490659   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:58.490694   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:55.548913   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:57.549457   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:00.647858   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.146570   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:59.623396   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.624074   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.030058   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:01.044568   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:01.044659   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:01.082652   67282 cri.go:89] found id: ""
	I1004 04:26:01.082679   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.082688   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:01.082694   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:01.082750   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:01.120781   67282 cri.go:89] found id: ""
	I1004 04:26:01.120805   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.120814   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:01.120821   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:01.120878   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:01.159494   67282 cri.go:89] found id: ""
	I1004 04:26:01.159523   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.159531   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:01.159537   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:01.159584   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:01.195482   67282 cri.go:89] found id: ""
	I1004 04:26:01.195512   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.195521   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:01.195529   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:01.195589   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:01.233971   67282 cri.go:89] found id: ""
	I1004 04:26:01.233996   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.234006   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:01.234014   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:01.234076   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:01.275935   67282 cri.go:89] found id: ""
	I1004 04:26:01.275958   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.275966   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:01.275971   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:01.276018   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:01.315512   67282 cri.go:89] found id: ""
	I1004 04:26:01.315535   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.315543   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:01.315548   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:01.315603   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:01.356465   67282 cri.go:89] found id: ""
	I1004 04:26:01.356491   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.356505   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:01.356513   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:01.356523   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:01.409237   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:01.409280   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:01.423426   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:01.423453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:01.501372   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:01.501397   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:01.501413   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:01.591087   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:01.591131   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:59.549485   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.550138   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.550258   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:05.646818   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:07.647322   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.634636   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:06.122840   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:04.152506   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:04.166847   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:04.166911   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:04.203138   67282 cri.go:89] found id: ""
	I1004 04:26:04.203167   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.203177   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:04.203184   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:04.203243   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:04.237427   67282 cri.go:89] found id: ""
	I1004 04:26:04.237453   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.237464   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:04.237471   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:04.237525   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:04.272468   67282 cri.go:89] found id: ""
	I1004 04:26:04.272499   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.272511   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:04.272518   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:04.272584   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:04.307347   67282 cri.go:89] found id: ""
	I1004 04:26:04.307373   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.307384   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:04.307390   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:04.307448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:04.342450   67282 cri.go:89] found id: ""
	I1004 04:26:04.342487   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.342498   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:04.342506   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:04.342568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:04.382846   67282 cri.go:89] found id: ""
	I1004 04:26:04.382874   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.382885   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:04.382893   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:04.382945   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:04.418234   67282 cri.go:89] found id: ""
	I1004 04:26:04.418260   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.418268   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:04.418273   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:04.418328   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:04.453433   67282 cri.go:89] found id: ""
	I1004 04:26:04.453456   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.453464   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:04.453473   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:04.453487   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:04.502093   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:04.502123   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:04.515865   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:04.515897   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:04.595672   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:04.595698   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:04.595713   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:04.675273   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:04.675304   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:07.214965   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:07.229495   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:07.229568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:07.268541   67282 cri.go:89] found id: ""
	I1004 04:26:07.268580   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.268591   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:07.268599   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:07.268662   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:07.321382   67282 cri.go:89] found id: ""
	I1004 04:26:07.321414   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.321424   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:07.321431   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:07.321490   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:07.379840   67282 cri.go:89] found id: ""
	I1004 04:26:07.379869   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.379878   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:07.379884   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:07.379928   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:07.431304   67282 cri.go:89] found id: ""
	I1004 04:26:07.431333   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.431343   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:07.431349   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:07.431407   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:07.466853   67282 cri.go:89] found id: ""
	I1004 04:26:07.466880   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.466888   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:07.466893   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:07.466951   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:07.501587   67282 cri.go:89] found id: ""
	I1004 04:26:07.501613   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.501624   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:07.501630   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:07.501685   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:07.536326   67282 cri.go:89] found id: ""
	I1004 04:26:07.536354   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.536364   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:07.536371   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:07.536426   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:07.575257   67282 cri.go:89] found id: ""
	I1004 04:26:07.575283   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.575292   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:07.575299   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:07.575310   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:07.629477   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:07.629515   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:07.643294   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:07.643326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:07.720324   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:07.720350   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:07.720365   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:07.797641   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:07.797678   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:06.049580   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:08.548786   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.146544   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:12.146842   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:08.622497   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.622759   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:12.624285   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.339392   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:10.353341   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:10.353397   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:10.391023   67282 cri.go:89] found id: ""
	I1004 04:26:10.391049   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.391059   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:10.391066   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:10.391129   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:10.424345   67282 cri.go:89] found id: ""
	I1004 04:26:10.424376   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.424388   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:10.424396   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:10.424466   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:10.459344   67282 cri.go:89] found id: ""
	I1004 04:26:10.459374   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.459387   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:10.459394   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:10.459451   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:10.494898   67282 cri.go:89] found id: ""
	I1004 04:26:10.494921   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.494929   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:10.494935   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:10.494982   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:10.531084   67282 cri.go:89] found id: ""
	I1004 04:26:10.531111   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.531122   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:10.531129   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:10.531185   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:10.566918   67282 cri.go:89] found id: ""
	I1004 04:26:10.566949   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.566960   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:10.566967   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:10.567024   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:10.604888   67282 cri.go:89] found id: ""
	I1004 04:26:10.604923   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.604935   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:10.604942   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:10.605013   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:10.641578   67282 cri.go:89] found id: ""
	I1004 04:26:10.641606   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.641620   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:10.641631   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:10.641648   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:10.696848   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:10.696882   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:10.710393   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:10.710417   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:10.780854   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:10.780881   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:10.780895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:10.861732   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:10.861771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:13.403231   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:13.417246   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:13.417319   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:13.451581   67282 cri.go:89] found id: ""
	I1004 04:26:13.451607   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.451616   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:13.451621   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:13.451681   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:13.488362   67282 cri.go:89] found id: ""
	I1004 04:26:13.488388   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.488396   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:13.488401   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:13.488449   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:10.549905   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:13.048997   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:14.646627   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:16.647879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:15.123067   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:17.622729   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:13.522697   67282 cri.go:89] found id: ""
	I1004 04:26:13.522729   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.522740   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:13.522751   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:13.522803   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:13.564926   67282 cri.go:89] found id: ""
	I1004 04:26:13.564959   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.564972   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:13.564981   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:13.565058   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:13.600582   67282 cri.go:89] found id: ""
	I1004 04:26:13.600612   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.600622   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:13.600630   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:13.600688   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:13.634550   67282 cri.go:89] found id: ""
	I1004 04:26:13.634575   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.634584   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:13.634591   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:13.634646   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:13.669281   67282 cri.go:89] found id: ""
	I1004 04:26:13.669311   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.669320   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:13.669326   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:13.669388   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:13.707664   67282 cri.go:89] found id: ""
	I1004 04:26:13.707693   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.707703   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:13.707713   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:13.707727   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:13.721127   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:13.721168   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:13.788026   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:13.788051   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:13.788067   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:13.864505   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:13.864542   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:13.902896   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:13.902921   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:16.456813   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:16.470071   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:16.470138   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:16.506085   67282 cri.go:89] found id: ""
	I1004 04:26:16.506114   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.506125   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:16.506133   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:16.506189   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:16.540016   67282 cri.go:89] found id: ""
	I1004 04:26:16.540044   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.540052   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:16.540056   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:16.540100   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:16.579247   67282 cri.go:89] found id: ""
	I1004 04:26:16.579272   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.579280   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:16.579285   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:16.579332   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:16.615552   67282 cri.go:89] found id: ""
	I1004 04:26:16.615579   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.615601   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:16.615621   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:16.615675   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:16.652639   67282 cri.go:89] found id: ""
	I1004 04:26:16.652660   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.652671   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:16.652678   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:16.652732   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:16.689607   67282 cri.go:89] found id: ""
	I1004 04:26:16.689631   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.689643   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:16.689650   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:16.689720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:16.724430   67282 cri.go:89] found id: ""
	I1004 04:26:16.724458   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.724469   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:16.724475   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:16.724534   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:16.758378   67282 cri.go:89] found id: ""
	I1004 04:26:16.758412   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.758423   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:16.758434   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:16.758454   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:16.826234   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:16.826259   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:16.826273   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:16.906908   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:16.906945   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:16.950295   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:16.950321   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:17.002216   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:17.002253   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:15.549441   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:17.549816   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.147105   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:21.147403   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.622982   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:21.624073   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.516253   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:19.529664   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:19.529726   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:19.566669   67282 cri.go:89] found id: ""
	I1004 04:26:19.566700   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.566711   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:19.566718   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:19.566772   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:19.605923   67282 cri.go:89] found id: ""
	I1004 04:26:19.605951   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.605961   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:19.605968   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:19.606025   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:19.645132   67282 cri.go:89] found id: ""
	I1004 04:26:19.645158   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.645168   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:19.645175   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:19.645235   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:19.687135   67282 cri.go:89] found id: ""
	I1004 04:26:19.687160   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.687171   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:19.687178   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:19.687256   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:19.724180   67282 cri.go:89] found id: ""
	I1004 04:26:19.724213   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.724224   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:19.724230   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:19.724295   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:19.761608   67282 cri.go:89] found id: ""
	I1004 04:26:19.761638   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.761649   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:19.761656   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:19.761714   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:19.795060   67282 cri.go:89] found id: ""
	I1004 04:26:19.795089   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.795099   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:19.795106   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:19.795164   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:19.835678   67282 cri.go:89] found id: ""
	I1004 04:26:19.835703   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.835712   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:19.835722   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:19.835736   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:19.889508   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:19.889543   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:19.903206   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:19.903233   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:19.973445   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:19.973471   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:19.973485   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:20.053996   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:20.054034   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:22.594171   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:22.609084   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:22.609145   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:22.650423   67282 cri.go:89] found id: ""
	I1004 04:26:22.650449   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.650459   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:22.650466   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:22.650525   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:22.686420   67282 cri.go:89] found id: ""
	I1004 04:26:22.686450   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.686461   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:22.686469   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:22.686535   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:22.721385   67282 cri.go:89] found id: ""
	I1004 04:26:22.721408   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.721416   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:22.721421   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:22.721484   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:22.765461   67282 cri.go:89] found id: ""
	I1004 04:26:22.765492   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.765504   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:22.765511   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:22.765569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:22.798192   67282 cri.go:89] found id: ""
	I1004 04:26:22.798220   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.798230   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:22.798235   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:22.798293   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:22.833110   67282 cri.go:89] found id: ""
	I1004 04:26:22.833138   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.833147   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:22.833153   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:22.833212   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:22.875653   67282 cri.go:89] found id: ""
	I1004 04:26:22.875684   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.875696   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:22.875704   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:22.875766   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:22.913906   67282 cri.go:89] found id: ""
	I1004 04:26:22.913931   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.913938   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:22.913946   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:22.913957   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:22.969480   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:22.969511   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:22.983475   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:22.983500   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:23.059953   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:23.059982   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:23.059996   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:23.139106   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:23.139134   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:19.550307   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:22.048618   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:23.647507   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:26.147135   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:24.122370   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:26.122976   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:25.678489   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:25.692648   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:25.692705   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:25.728232   67282 cri.go:89] found id: ""
	I1004 04:26:25.728261   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.728269   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:25.728276   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:25.728335   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:25.763956   67282 cri.go:89] found id: ""
	I1004 04:26:25.763982   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.763991   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:25.763998   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:25.764057   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:25.799715   67282 cri.go:89] found id: ""
	I1004 04:26:25.799743   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.799753   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:25.799761   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:25.799840   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:25.834823   67282 cri.go:89] found id: ""
	I1004 04:26:25.834855   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.834866   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:25.834873   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:25.834933   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:25.869194   67282 cri.go:89] found id: ""
	I1004 04:26:25.869224   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.869235   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:25.869242   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:25.869303   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:25.903514   67282 cri.go:89] found id: ""
	I1004 04:26:25.903543   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.903553   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:25.903558   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:25.903606   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:25.939887   67282 cri.go:89] found id: ""
	I1004 04:26:25.939919   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.939930   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:25.939938   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:25.939996   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:25.981922   67282 cri.go:89] found id: ""
	I1004 04:26:25.981944   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.981952   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:25.981960   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:25.981971   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:26.064860   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:26.064891   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:26.105272   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:26.105296   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:26.162602   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:26.162640   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:26.176408   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:26.176439   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:26.242264   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:24.549364   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:27.049470   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.646788   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.146205   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:33.146879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.622691   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.122181   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:33.123226   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.742417   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:28.755655   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:28.755723   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:28.789338   67282 cri.go:89] found id: ""
	I1004 04:26:28.789361   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.789369   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:28.789374   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:28.789420   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:28.823513   67282 cri.go:89] found id: ""
	I1004 04:26:28.823544   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.823555   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:28.823562   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:28.823619   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:28.858826   67282 cri.go:89] found id: ""
	I1004 04:26:28.858854   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.858866   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:28.858873   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:28.858927   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:28.892552   67282 cri.go:89] found id: ""
	I1004 04:26:28.892579   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.892587   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:28.892593   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:28.892639   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:28.929250   67282 cri.go:89] found id: ""
	I1004 04:26:28.929277   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.929284   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:28.929289   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:28.929335   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:28.966554   67282 cri.go:89] found id: ""
	I1004 04:26:28.966581   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.966589   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:28.966594   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:28.966642   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:28.999930   67282 cri.go:89] found id: ""
	I1004 04:26:28.999954   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.999964   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:28.999970   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:29.000025   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:29.033687   67282 cri.go:89] found id: ""
	I1004 04:26:29.033717   67282 logs.go:282] 0 containers: []
	W1004 04:26:29.033727   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:29.033737   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:29.033752   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:29.109486   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:29.109523   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:29.149125   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:29.149152   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:29.197830   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:29.197861   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:29.211182   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:29.211204   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:29.276808   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:31.777659   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:31.791374   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:31.791425   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:31.825453   67282 cri.go:89] found id: ""
	I1004 04:26:31.825480   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.825489   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:31.825495   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:31.825553   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:31.857845   67282 cri.go:89] found id: ""
	I1004 04:26:31.857875   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.857884   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:31.857893   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:31.857949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:31.892282   67282 cri.go:89] found id: ""
	I1004 04:26:31.892309   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.892317   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:31.892322   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:31.892366   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:31.926016   67282 cri.go:89] found id: ""
	I1004 04:26:31.926037   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.926045   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:31.926051   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:31.926094   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:31.961382   67282 cri.go:89] found id: ""
	I1004 04:26:31.961415   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.961425   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:31.961433   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:31.961492   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:31.994570   67282 cri.go:89] found id: ""
	I1004 04:26:31.994602   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.994613   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:31.994620   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:31.994675   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:32.027359   67282 cri.go:89] found id: ""
	I1004 04:26:32.027383   67282 logs.go:282] 0 containers: []
	W1004 04:26:32.027391   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:32.027397   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:32.027448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:32.063518   67282 cri.go:89] found id: ""
	I1004 04:26:32.063545   67282 logs.go:282] 0 containers: []
	W1004 04:26:32.063555   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:32.063565   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:32.063577   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:32.151555   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:32.151582   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:32.190678   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:32.190700   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:32.243567   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:32.243596   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:32.256293   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:32.256320   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:32.329513   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:29.548687   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.550184   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:34.050659   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:35.147870   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:37.646571   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:35.623302   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:38.122555   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:34.830126   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:34.844760   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:34.844833   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:34.878409   67282 cri.go:89] found id: ""
	I1004 04:26:34.878433   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.878440   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:34.878445   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:34.878500   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:34.916493   67282 cri.go:89] found id: ""
	I1004 04:26:34.916516   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.916524   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:34.916532   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:34.916577   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:34.954532   67282 cri.go:89] found id: ""
	I1004 04:26:34.954556   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.954565   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:34.954570   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:34.954616   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:34.987163   67282 cri.go:89] found id: ""
	I1004 04:26:34.987190   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.987198   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:34.987205   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:34.987261   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:35.021351   67282 cri.go:89] found id: ""
	I1004 04:26:35.021379   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.021388   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:35.021394   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:35.021452   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:35.056350   67282 cri.go:89] found id: ""
	I1004 04:26:35.056376   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.056384   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:35.056390   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:35.056448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:35.093375   67282 cri.go:89] found id: ""
	I1004 04:26:35.093402   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.093412   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:35.093420   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:35.093486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:35.130509   67282 cri.go:89] found id: ""
	I1004 04:26:35.130532   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.130541   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:35.130549   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:35.130562   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:35.188138   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:35.188174   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:35.202226   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:35.202267   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:35.276652   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:35.276675   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:35.276688   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:35.357339   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:35.357373   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:37.898166   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:37.911319   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:37.911387   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:37.944551   67282 cri.go:89] found id: ""
	I1004 04:26:37.944578   67282 logs.go:282] 0 containers: []
	W1004 04:26:37.944590   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:37.944597   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:37.944652   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:37.978066   67282 cri.go:89] found id: ""
	I1004 04:26:37.978093   67282 logs.go:282] 0 containers: []
	W1004 04:26:37.978101   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:37.978107   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:37.978163   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:38.011065   67282 cri.go:89] found id: ""
	I1004 04:26:38.011095   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.011104   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:38.011109   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:38.011156   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:38.050323   67282 cri.go:89] found id: ""
	I1004 04:26:38.050349   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.050359   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:38.050366   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:38.050425   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:38.089141   67282 cri.go:89] found id: ""
	I1004 04:26:38.089169   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.089177   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:38.089182   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:38.089258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:38.122625   67282 cri.go:89] found id: ""
	I1004 04:26:38.122653   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.122663   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:38.122671   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:38.122719   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:38.159957   67282 cri.go:89] found id: ""
	I1004 04:26:38.159982   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.159990   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:38.159996   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:38.160085   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:38.194592   67282 cri.go:89] found id: ""
	I1004 04:26:38.194618   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.194626   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:38.194646   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:38.194657   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:38.263914   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:38.263945   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:38.263958   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:38.339864   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:38.339895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:38.375477   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:38.375505   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:38.428292   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:38.428320   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:36.050815   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:38.548602   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:39.646794   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.146914   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:40.123280   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.623659   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:40.941910   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:40.955041   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:40.955117   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:40.991278   67282 cri.go:89] found id: ""
	I1004 04:26:40.991307   67282 logs.go:282] 0 containers: []
	W1004 04:26:40.991317   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:40.991325   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:40.991389   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:41.025347   67282 cri.go:89] found id: ""
	I1004 04:26:41.025373   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.025385   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:41.025392   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:41.025450   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:41.060974   67282 cri.go:89] found id: ""
	I1004 04:26:41.061001   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.061019   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:41.061026   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:41.061087   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:41.097557   67282 cri.go:89] found id: ""
	I1004 04:26:41.097587   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.097598   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:41.097605   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:41.097665   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:41.136371   67282 cri.go:89] found id: ""
	I1004 04:26:41.136396   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.136405   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:41.136412   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:41.136472   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:41.172590   67282 cri.go:89] found id: ""
	I1004 04:26:41.172617   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.172627   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:41.172634   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:41.172687   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:41.209124   67282 cri.go:89] found id: ""
	I1004 04:26:41.209146   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.209154   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:41.209159   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:41.209214   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:41.250654   67282 cri.go:89] found id: ""
	I1004 04:26:41.250687   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.250699   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:41.250709   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:41.250723   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:41.305814   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:41.305864   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:41.322961   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:41.322989   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:41.427611   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:41.427632   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:41.427648   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:41.505830   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:41.505877   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:40.549691   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.549838   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:44.647149   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.146894   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:45.122344   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.122706   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:44.050902   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:44.065277   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:44.065343   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:44.101089   67282 cri.go:89] found id: ""
	I1004 04:26:44.101110   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.101117   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:44.101123   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:44.101174   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:44.138570   67282 cri.go:89] found id: ""
	I1004 04:26:44.138593   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.138601   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:44.138606   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:44.138650   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:44.178423   67282 cri.go:89] found id: ""
	I1004 04:26:44.178456   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.178478   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:44.178486   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:44.178556   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:44.213301   67282 cri.go:89] found id: ""
	I1004 04:26:44.213330   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.213338   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:44.213344   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:44.213401   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:44.247653   67282 cri.go:89] found id: ""
	I1004 04:26:44.247681   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.247688   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:44.247694   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:44.247756   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:44.281667   67282 cri.go:89] found id: ""
	I1004 04:26:44.281693   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.281704   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:44.281711   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:44.281767   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:44.314637   67282 cri.go:89] found id: ""
	I1004 04:26:44.314667   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.314677   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:44.314684   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:44.314760   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:44.349432   67282 cri.go:89] found id: ""
	I1004 04:26:44.349459   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.349469   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:44.349479   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:44.349492   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:44.397134   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:44.397168   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:44.410708   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:44.410738   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:44.482025   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:44.482049   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:44.482065   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:44.562652   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:44.562699   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:47.101459   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:47.116923   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:47.117020   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:47.153495   67282 cri.go:89] found id: ""
	I1004 04:26:47.153524   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.153534   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:47.153541   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:47.153601   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:47.189976   67282 cri.go:89] found id: ""
	I1004 04:26:47.190004   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.190014   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:47.190023   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:47.190084   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:47.225712   67282 cri.go:89] found id: ""
	I1004 04:26:47.225740   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.225748   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:47.225754   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:47.225800   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:47.261565   67282 cri.go:89] found id: ""
	I1004 04:26:47.261593   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.261603   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:47.261608   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:47.261665   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:47.298152   67282 cri.go:89] found id: ""
	I1004 04:26:47.298204   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.298214   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:47.298223   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:47.298279   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:47.338226   67282 cri.go:89] found id: ""
	I1004 04:26:47.338253   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.338261   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:47.338267   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:47.338320   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:47.378859   67282 cri.go:89] found id: ""
	I1004 04:26:47.378892   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.378902   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:47.378909   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:47.378964   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:47.418161   67282 cri.go:89] found id: ""
	I1004 04:26:47.418186   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.418194   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:47.418203   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:47.418213   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:47.470271   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:47.470311   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:47.484416   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:47.484453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:47.556744   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:47.556767   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:47.556778   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:47.634266   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:47.634299   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:45.050501   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.550072   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:49.147562   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:51.648504   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:49.623375   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:52.122346   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:50.175746   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:50.191850   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:50.191945   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:50.229542   67282 cri.go:89] found id: ""
	I1004 04:26:50.229574   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.229584   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:50.229593   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:50.229655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:50.268401   67282 cri.go:89] found id: ""
	I1004 04:26:50.268432   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.268441   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:50.268449   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:50.268522   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:50.302927   67282 cri.go:89] found id: ""
	I1004 04:26:50.302954   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.302964   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:50.302969   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:50.303029   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:50.336617   67282 cri.go:89] found id: ""
	I1004 04:26:50.336646   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.336656   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:50.336663   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:50.336724   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:50.372871   67282 cri.go:89] found id: ""
	I1004 04:26:50.372901   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.372911   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:50.372918   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:50.372977   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:50.409601   67282 cri.go:89] found id: ""
	I1004 04:26:50.409629   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.409640   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:50.409648   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:50.409723   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:50.451899   67282 cri.go:89] found id: ""
	I1004 04:26:50.451927   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.451935   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:50.451940   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:50.451991   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:50.487306   67282 cri.go:89] found id: ""
	I1004 04:26:50.487332   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.487343   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:50.487353   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:50.487369   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:50.565167   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:50.565192   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:50.565207   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:50.646155   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:50.646194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:50.688459   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:50.688489   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:50.742416   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:50.742460   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:53.257063   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:53.270546   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:53.270618   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:53.306504   67282 cri.go:89] found id: ""
	I1004 04:26:53.306530   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.306538   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:53.306544   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:53.306594   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:53.343256   67282 cri.go:89] found id: ""
	I1004 04:26:53.343285   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.343293   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:53.343299   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:53.343352   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:53.380834   67282 cri.go:89] found id: ""
	I1004 04:26:53.380864   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.380873   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:53.380880   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:53.380940   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:53.417361   67282 cri.go:89] found id: ""
	I1004 04:26:53.417391   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.417404   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:53.417415   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:53.417479   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:53.451948   67282 cri.go:89] found id: ""
	I1004 04:26:53.451970   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.451978   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:53.451983   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:53.452039   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:53.487731   67282 cri.go:89] found id: ""
	I1004 04:26:53.487756   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.487764   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:53.487769   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:53.487836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:50.049952   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:52.050275   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:54.151420   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.647593   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:54.122386   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.623398   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:53.531549   67282 cri.go:89] found id: ""
	I1004 04:26:53.531573   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.531582   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:53.531587   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:53.531643   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:53.578123   67282 cri.go:89] found id: ""
	I1004 04:26:53.578151   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.578162   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:53.578180   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:53.578195   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:53.643062   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:53.643093   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:53.696157   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:53.696194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:53.709884   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:53.709910   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:53.791272   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:53.791297   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:53.791314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:56.371608   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:56.386293   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:56.386376   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:56.425531   67282 cri.go:89] found id: ""
	I1004 04:26:56.425560   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.425571   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:56.425578   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:56.425646   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:56.470293   67282 cri.go:89] found id: ""
	I1004 04:26:56.470326   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.470335   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:56.470340   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:56.470400   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:56.508927   67282 cri.go:89] found id: ""
	I1004 04:26:56.508955   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.508963   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:56.508968   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:56.509018   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:56.549149   67282 cri.go:89] found id: ""
	I1004 04:26:56.549178   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.549191   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:56.549199   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:56.549270   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:56.589412   67282 cri.go:89] found id: ""
	I1004 04:26:56.589441   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.589451   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:56.589459   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:56.589517   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:56.624732   67282 cri.go:89] found id: ""
	I1004 04:26:56.624760   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.624770   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:56.624776   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:56.624838   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:56.662385   67282 cri.go:89] found id: ""
	I1004 04:26:56.662413   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.662421   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:56.662427   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:56.662483   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:56.697982   67282 cri.go:89] found id: ""
	I1004 04:26:56.698014   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.698025   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:56.698036   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:56.698049   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:56.750597   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:56.750633   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:56.764884   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:56.764921   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:56.844404   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:56.844433   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:56.844451   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:56.924373   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:56.924406   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:54.548706   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.549763   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.049294   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:58.648470   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:01.146948   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.148357   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.123321   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:01.622391   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.466449   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:59.481897   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:59.481972   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:59.535384   67282 cri.go:89] found id: ""
	I1004 04:26:59.535411   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.535422   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:59.535428   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:59.535486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:59.595843   67282 cri.go:89] found id: ""
	I1004 04:26:59.595875   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.595886   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:59.595894   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:59.595954   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:59.641010   67282 cri.go:89] found id: ""
	I1004 04:26:59.641041   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.641049   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:59.641057   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:59.641102   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:59.679705   67282 cri.go:89] found id: ""
	I1004 04:26:59.679736   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.679746   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:59.679753   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:59.679828   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:59.715960   67282 cri.go:89] found id: ""
	I1004 04:26:59.715985   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.715993   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:59.715998   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:59.716047   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:59.757406   67282 cri.go:89] found id: ""
	I1004 04:26:59.757442   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.757453   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:59.757461   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:59.757528   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:59.792038   67282 cri.go:89] found id: ""
	I1004 04:26:59.792066   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.792076   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:59.792083   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:59.792141   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:59.830258   67282 cri.go:89] found id: ""
	I1004 04:26:59.830281   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.830289   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:59.830296   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:59.830308   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:59.877273   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:59.877304   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:59.932570   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:59.932610   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:59.945896   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:59.945919   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:00.020363   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:00.020392   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:00.020412   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:02.601022   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:02.615039   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:02.615112   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:02.654541   67282 cri.go:89] found id: ""
	I1004 04:27:02.654567   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.654574   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:02.654579   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:02.654638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:02.691313   67282 cri.go:89] found id: ""
	I1004 04:27:02.691338   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.691349   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:02.691355   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:02.691414   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:02.735337   67282 cri.go:89] found id: ""
	I1004 04:27:02.735367   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.735376   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:02.735383   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:02.735486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:02.769604   67282 cri.go:89] found id: ""
	I1004 04:27:02.769628   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.769638   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:02.769643   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:02.769704   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:02.812913   67282 cri.go:89] found id: ""
	I1004 04:27:02.812938   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.812949   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:02.812954   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:02.813020   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:02.849910   67282 cri.go:89] found id: ""
	I1004 04:27:02.849939   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.849949   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:02.849956   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:02.850023   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:02.889467   67282 cri.go:89] found id: ""
	I1004 04:27:02.889497   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.889509   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:02.889517   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:02.889575   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:02.928508   67282 cri.go:89] found id: ""
	I1004 04:27:02.928529   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.928537   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:02.928545   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:02.928556   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:02.942783   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:02.942821   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:03.018282   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:03.018304   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:03.018314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:03.101588   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:03.101622   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:03.149911   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:03.149937   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:01.051581   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.550066   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.646200   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:07.648479   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.622932   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.623005   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.121151   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.703125   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:05.717243   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:05.717303   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:05.752564   67282 cri.go:89] found id: ""
	I1004 04:27:05.752588   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.752597   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:05.752609   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:05.752656   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:05.786955   67282 cri.go:89] found id: ""
	I1004 04:27:05.786983   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.786994   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:05.787001   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:05.787073   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:05.823848   67282 cri.go:89] found id: ""
	I1004 04:27:05.823882   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.823893   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:05.823901   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:05.823970   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:05.866192   67282 cri.go:89] found id: ""
	I1004 04:27:05.866220   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.866238   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:05.866246   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:05.866305   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:05.904051   67282 cri.go:89] found id: ""
	I1004 04:27:05.904078   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.904089   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:05.904096   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:05.904154   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:05.940041   67282 cri.go:89] found id: ""
	I1004 04:27:05.940075   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.940085   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:05.940092   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:05.940158   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:05.975758   67282 cri.go:89] found id: ""
	I1004 04:27:05.975799   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.975810   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:05.975818   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:05.975892   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:06.011044   67282 cri.go:89] found id: ""
	I1004 04:27:06.011086   67282 logs.go:282] 0 containers: []
	W1004 04:27:06.011096   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:06.011105   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:06.011116   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:06.024900   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:06.024937   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:06.109932   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:06.109960   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:06.109976   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:06.189517   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:06.189557   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:06.230019   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:06.230048   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:06.050004   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.548768   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:10.147814   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.646430   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:10.122097   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.123967   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.785355   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:08.799156   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:08.799218   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:08.843606   67282 cri.go:89] found id: ""
	I1004 04:27:08.843634   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.843643   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:08.843648   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:08.843698   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:08.884418   67282 cri.go:89] found id: ""
	I1004 04:27:08.884443   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.884450   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:08.884456   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:08.884503   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:08.925878   67282 cri.go:89] found id: ""
	I1004 04:27:08.925906   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.925914   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:08.925920   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:08.925970   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:08.966127   67282 cri.go:89] found id: ""
	I1004 04:27:08.966157   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.966167   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:08.966173   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:08.966227   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:09.010646   67282 cri.go:89] found id: ""
	I1004 04:27:09.010672   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.010682   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:09.010702   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:09.010769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:09.049738   67282 cri.go:89] found id: ""
	I1004 04:27:09.049761   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.049768   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:09.049774   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:09.049825   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:09.082709   67282 cri.go:89] found id: ""
	I1004 04:27:09.082739   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.082747   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:09.082752   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:09.082808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:09.120574   67282 cri.go:89] found id: ""
	I1004 04:27:09.120605   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.120617   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:09.120626   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:09.120636   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:09.202880   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:09.202922   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:09.242668   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:09.242700   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:09.298662   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:09.298703   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:09.314832   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:09.314868   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:09.389062   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:11.889645   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:11.902953   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:11.903012   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:11.939846   67282 cri.go:89] found id: ""
	I1004 04:27:11.939874   67282 logs.go:282] 0 containers: []
	W1004 04:27:11.939882   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:11.939888   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:11.939936   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:11.975281   67282 cri.go:89] found id: ""
	I1004 04:27:11.975303   67282 logs.go:282] 0 containers: []
	W1004 04:27:11.975311   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:11.975317   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:11.975370   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:12.011400   67282 cri.go:89] found id: ""
	I1004 04:27:12.011428   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.011438   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:12.011443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:12.011506   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:12.046862   67282 cri.go:89] found id: ""
	I1004 04:27:12.046889   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.046898   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:12.046905   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:12.046960   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:12.081537   67282 cri.go:89] found id: ""
	I1004 04:27:12.081569   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.081581   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:12.081590   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:12.081655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:12.121982   67282 cri.go:89] found id: ""
	I1004 04:27:12.122010   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.122021   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:12.122028   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:12.122086   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:12.161419   67282 cri.go:89] found id: ""
	I1004 04:27:12.161460   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.161473   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:12.161481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:12.161549   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:12.202188   67282 cri.go:89] found id: ""
	I1004 04:27:12.202230   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.202242   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:12.202253   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:12.202267   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:12.253424   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:12.253462   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:12.268116   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:12.268141   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:12.337788   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:12.337814   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:12.337826   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:12.417359   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:12.417395   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:10.549097   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.549239   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.647267   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:17.147526   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.623050   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:16.623702   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.959596   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:14.973031   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:14.973090   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:15.011451   67282 cri.go:89] found id: ""
	I1004 04:27:15.011487   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.011497   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:15.011513   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:15.011572   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:15.055767   67282 cri.go:89] found id: ""
	I1004 04:27:15.055817   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.055829   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:15.055836   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:15.055915   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:15.096357   67282 cri.go:89] found id: ""
	I1004 04:27:15.096385   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.096394   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:15.096399   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:15.096456   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:15.131824   67282 cri.go:89] found id: ""
	I1004 04:27:15.131853   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.131863   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:15.131870   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:15.131932   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:15.169250   67282 cri.go:89] found id: ""
	I1004 04:27:15.169285   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.169299   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:15.169307   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:15.169373   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:15.206852   67282 cri.go:89] found id: ""
	I1004 04:27:15.206881   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.206889   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:15.206895   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:15.206949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:15.241392   67282 cri.go:89] found id: ""
	I1004 04:27:15.241421   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.241431   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:15.241439   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:15.241498   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:15.280697   67282 cri.go:89] found id: ""
	I1004 04:27:15.280723   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.280734   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:15.280744   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:15.280758   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:15.361681   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:15.361716   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:15.404640   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:15.404676   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:15.457287   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:15.457326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:15.471162   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:15.471188   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:15.544157   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:18.045094   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:18.060228   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:18.060310   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:18.096659   67282 cri.go:89] found id: ""
	I1004 04:27:18.096688   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.096697   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:18.096703   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:18.096757   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:18.135538   67282 cri.go:89] found id: ""
	I1004 04:27:18.135565   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.135573   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:18.135579   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:18.135629   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:18.171051   67282 cri.go:89] found id: ""
	I1004 04:27:18.171082   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.171098   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:18.171106   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:18.171168   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:18.205696   67282 cri.go:89] found id: ""
	I1004 04:27:18.205725   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.205735   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:18.205742   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:18.205803   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:18.240545   67282 cri.go:89] found id: ""
	I1004 04:27:18.240566   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.240576   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:18.240584   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:18.240638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:18.279185   67282 cri.go:89] found id: ""
	I1004 04:27:18.279221   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.279232   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:18.279239   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:18.279310   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:18.318395   67282 cri.go:89] found id: ""
	I1004 04:27:18.318417   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.318424   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:18.318430   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:18.318476   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:18.352367   67282 cri.go:89] found id: ""
	I1004 04:27:18.352390   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.352398   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:18.352407   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:18.352420   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:18.365604   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:18.365637   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:18.438407   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:18.438427   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:18.438438   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:14.549690   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:16.550244   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:18.550355   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:19.647031   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:22.147826   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:19.126090   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:21.623910   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:18.513645   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:18.513679   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:18.557224   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:18.557250   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:21.111005   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:21.126573   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:21.126631   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:21.161161   67282 cri.go:89] found id: ""
	I1004 04:27:21.161190   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.161201   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:21.161207   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:21.161258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:21.199517   67282 cri.go:89] found id: ""
	I1004 04:27:21.199544   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.199555   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:21.199562   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:21.199625   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:21.236210   67282 cri.go:89] found id: ""
	I1004 04:27:21.236238   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.236246   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:21.236251   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:21.236311   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:21.272720   67282 cri.go:89] found id: ""
	I1004 04:27:21.272746   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.272753   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:21.272759   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:21.272808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:21.311439   67282 cri.go:89] found id: ""
	I1004 04:27:21.311474   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.311484   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:21.311491   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:21.311551   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:21.360400   67282 cri.go:89] found id: ""
	I1004 04:27:21.360427   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.360436   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:21.360443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:21.360511   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:21.394627   67282 cri.go:89] found id: ""
	I1004 04:27:21.394656   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.394667   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:21.394673   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:21.394721   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:21.429736   67282 cri.go:89] found id: ""
	I1004 04:27:21.429762   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.429770   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:21.429778   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:21.429789   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:21.482773   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:21.482808   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:21.497570   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:21.497595   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:21.582335   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:21.582355   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:21.582367   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:21.662196   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:21.662230   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:21.050000   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:23.050516   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.647074   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:27.147999   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.123142   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:26.624049   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.205743   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:24.222878   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:24.222951   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:24.263410   67282 cri.go:89] found id: ""
	I1004 04:27:24.263450   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.263462   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:24.263469   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:24.263532   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:24.306892   67282 cri.go:89] found id: ""
	I1004 04:27:24.306923   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.306934   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:24.306941   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:24.307008   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:24.345522   67282 cri.go:89] found id: ""
	I1004 04:27:24.345559   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.345571   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:24.345579   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:24.345638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:24.384893   67282 cri.go:89] found id: ""
	I1004 04:27:24.384918   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.384925   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:24.384931   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:24.384978   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:24.420998   67282 cri.go:89] found id: ""
	I1004 04:27:24.421025   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.421036   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:24.421043   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:24.421105   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:24.456277   67282 cri.go:89] found id: ""
	I1004 04:27:24.456305   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.456315   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:24.456322   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:24.456383   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:24.497852   67282 cri.go:89] found id: ""
	I1004 04:27:24.497881   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.497892   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:24.497900   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:24.497960   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:24.538702   67282 cri.go:89] found id: ""
	I1004 04:27:24.538736   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.538755   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:24.538766   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:24.538778   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:24.553747   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:24.553773   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:24.638059   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:24.638081   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:24.638093   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:24.718165   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:24.718212   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:24.759770   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:24.759811   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:27.311684   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:27.327493   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:27.327570   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:27.362804   67282 cri.go:89] found id: ""
	I1004 04:27:27.362827   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.362836   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:27.362841   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:27.362888   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:27.401576   67282 cri.go:89] found id: ""
	I1004 04:27:27.401604   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.401614   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:27.401621   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:27.401682   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:27.445152   67282 cri.go:89] found id: ""
	I1004 04:27:27.445177   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.445187   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:27.445193   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:27.445240   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:27.482710   67282 cri.go:89] found id: ""
	I1004 04:27:27.482734   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.482742   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:27.482749   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:27.482808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:27.519459   67282 cri.go:89] found id: ""
	I1004 04:27:27.519488   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.519498   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:27.519505   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:27.519569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:27.559381   67282 cri.go:89] found id: ""
	I1004 04:27:27.559407   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.559417   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:27.559423   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:27.559468   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:27.609040   67282 cri.go:89] found id: ""
	I1004 04:27:27.609068   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.609076   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:27.609081   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:27.609128   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:27.654537   67282 cri.go:89] found id: ""
	I1004 04:27:27.654569   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.654579   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:27.654590   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:27.654603   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:27.709062   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:27.709098   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:27.722931   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:27.722955   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:27.796863   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:27.796884   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:27.796895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:27.879840   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:27.879876   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:25.549643   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:27.551373   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:29.646879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:31.646956   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:29.122087   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:31.122774   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:30.423644   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:30.439256   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:30.439311   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:30.479612   67282 cri.go:89] found id: ""
	I1004 04:27:30.479640   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.479648   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:30.479654   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:30.479750   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:30.522846   67282 cri.go:89] found id: ""
	I1004 04:27:30.522879   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.522890   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:30.522898   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:30.522946   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:30.558935   67282 cri.go:89] found id: ""
	I1004 04:27:30.558962   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.558971   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:30.558976   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:30.559032   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:30.603383   67282 cri.go:89] found id: ""
	I1004 04:27:30.603411   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.603421   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:30.603428   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:30.603492   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:30.644700   67282 cri.go:89] found id: ""
	I1004 04:27:30.644727   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.644737   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:30.644744   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:30.644799   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:30.680328   67282 cri.go:89] found id: ""
	I1004 04:27:30.680358   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.680367   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:30.680372   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:30.680419   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:30.717973   67282 cri.go:89] found id: ""
	I1004 04:27:30.717995   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.718005   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:30.718021   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:30.718082   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:30.755838   67282 cri.go:89] found id: ""
	I1004 04:27:30.755866   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.755874   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:30.755882   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:30.755893   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:30.809999   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:30.810036   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:30.824447   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:30.824491   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:30.902008   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:30.902030   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:30.902043   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:30.986938   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:30.986984   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:30.049983   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:32.050033   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:34.050671   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.647707   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:36.146619   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.624575   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:36.122046   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.531108   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:33.546681   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:33.546759   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:33.586444   67282 cri.go:89] found id: ""
	I1004 04:27:33.586469   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.586479   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:33.586486   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:33.586552   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:33.629340   67282 cri.go:89] found id: ""
	I1004 04:27:33.629365   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.629373   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:33.629378   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:33.629429   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:33.668446   67282 cri.go:89] found id: ""
	I1004 04:27:33.668473   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.668483   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:33.668490   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:33.668548   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:33.706287   67282 cri.go:89] found id: ""
	I1004 04:27:33.706312   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.706320   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:33.706327   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:33.706385   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:33.746161   67282 cri.go:89] found id: ""
	I1004 04:27:33.746189   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.746200   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:33.746207   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:33.746270   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:33.782157   67282 cri.go:89] found id: ""
	I1004 04:27:33.782184   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.782194   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:33.782200   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:33.782262   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:33.820332   67282 cri.go:89] found id: ""
	I1004 04:27:33.820361   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.820371   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:33.820378   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:33.820437   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:33.859431   67282 cri.go:89] found id: ""
	I1004 04:27:33.859458   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.859467   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:33.859475   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:33.859485   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:33.910259   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:33.910292   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:33.925149   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:33.925177   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:34.006153   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:34.006187   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:34.006202   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:34.115882   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:34.115916   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:36.662964   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:36.677071   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:36.677139   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:36.720785   67282 cri.go:89] found id: ""
	I1004 04:27:36.720807   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.720818   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:36.720826   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:36.720875   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:36.757535   67282 cri.go:89] found id: ""
	I1004 04:27:36.757563   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.757574   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:36.757582   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:36.757630   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:36.800989   67282 cri.go:89] found id: ""
	I1004 04:27:36.801024   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.801038   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:36.801046   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:36.801112   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:36.837101   67282 cri.go:89] found id: ""
	I1004 04:27:36.837122   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.837131   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:36.837136   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:36.837181   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:36.876325   67282 cri.go:89] found id: ""
	I1004 04:27:36.876358   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.876370   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:36.876379   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:36.876444   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:36.914720   67282 cri.go:89] found id: ""
	I1004 04:27:36.914749   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.914759   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:36.914767   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:36.914828   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:36.949672   67282 cri.go:89] found id: ""
	I1004 04:27:36.949694   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.949701   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:36.949706   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:36.949754   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:36.983374   67282 cri.go:89] found id: ""
	I1004 04:27:36.983406   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.983416   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:36.983427   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:36.983440   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:37.039040   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:37.039075   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:37.054873   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:37.054898   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:37.131537   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:37.131562   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:37.131577   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:37.213958   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:37.213990   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:36.548751   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:39.049804   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:38.646028   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:40.646213   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:42.648505   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:38.623560   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:40.623721   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:43.122033   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:39.754264   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:39.771465   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:39.771545   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:39.829530   67282 cri.go:89] found id: ""
	I1004 04:27:39.829560   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.829572   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:39.829580   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:39.829639   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:39.876055   67282 cri.go:89] found id: ""
	I1004 04:27:39.876078   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.876090   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:39.876095   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:39.876142   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:39.913304   67282 cri.go:89] found id: ""
	I1004 04:27:39.913327   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.913335   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:39.913340   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:39.913389   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:39.948821   67282 cri.go:89] found id: ""
	I1004 04:27:39.948847   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.948855   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:39.948862   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:39.948916   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:39.986994   67282 cri.go:89] found id: ""
	I1004 04:27:39.987023   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.987034   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:39.987041   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:39.987141   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:40.026627   67282 cri.go:89] found id: ""
	I1004 04:27:40.026656   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.026668   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:40.026675   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:40.026734   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:40.067028   67282 cri.go:89] found id: ""
	I1004 04:27:40.067068   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.067079   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:40.067086   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:40.067144   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:40.105638   67282 cri.go:89] found id: ""
	I1004 04:27:40.105667   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.105677   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:40.105694   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:40.105707   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:40.159425   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:40.159467   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:40.175045   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:40.175073   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:40.261967   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:40.261989   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:40.262002   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:40.345317   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:40.345354   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:42.888115   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:42.901889   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:42.901948   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:42.938556   67282 cri.go:89] found id: ""
	I1004 04:27:42.938587   67282 logs.go:282] 0 containers: []
	W1004 04:27:42.938597   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:42.938604   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:42.938668   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:42.974569   67282 cri.go:89] found id: ""
	I1004 04:27:42.974595   67282 logs.go:282] 0 containers: []
	W1004 04:27:42.974606   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:42.974613   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:42.974679   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:43.010552   67282 cri.go:89] found id: ""
	I1004 04:27:43.010581   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.010593   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:43.010600   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:43.010655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:43.046204   67282 cri.go:89] found id: ""
	I1004 04:27:43.046237   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.046247   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:43.046254   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:43.046313   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:43.081612   67282 cri.go:89] found id: ""
	I1004 04:27:43.081644   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.081655   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:43.081662   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:43.081729   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:43.121103   67282 cri.go:89] found id: ""
	I1004 04:27:43.121126   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.121133   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:43.121139   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:43.121191   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:43.157104   67282 cri.go:89] found id: ""
	I1004 04:27:43.157128   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.157136   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:43.157141   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:43.157196   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:43.198927   67282 cri.go:89] found id: ""
	I1004 04:27:43.198951   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.198958   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:43.198966   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:43.198975   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:43.254534   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:43.254563   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:43.268106   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:43.268130   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:43.344382   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:43.344410   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:43.344425   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:43.426916   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:43.426948   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
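The cycle above repeats for every control-plane component: minikube asks CRI-O for any container matching the component name and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal shell sketch of the same per-component check (the crictl invocation is taken verbatim from the log; the loop itself is only illustrative):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      # same query the log runs via ssh_runner: list all containers (any state) with this name
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container found matching \"$name\""
    done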
	I1004 04:27:41.549364   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:43.549590   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.146452   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:47.148300   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.126135   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:47.622568   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.966806   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:45.980187   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:45.980252   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:46.014196   67282 cri.go:89] found id: ""
	I1004 04:27:46.014220   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.014228   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:46.014233   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:46.014295   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:46.053910   67282 cri.go:89] found id: ""
	I1004 04:27:46.053940   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.053951   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:46.053957   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:46.054013   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:46.087896   67282 cri.go:89] found id: ""
	I1004 04:27:46.087921   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.087930   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:46.087936   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:46.087985   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:46.123441   67282 cri.go:89] found id: ""
	I1004 04:27:46.123465   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.123475   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:46.123481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:46.123545   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:46.159664   67282 cri.go:89] found id: ""
	I1004 04:27:46.159688   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.159698   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:46.159704   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:46.159761   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:46.195474   67282 cri.go:89] found id: ""
	I1004 04:27:46.195501   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.195512   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:46.195525   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:46.195569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:46.228670   67282 cri.go:89] found id: ""
	I1004 04:27:46.228693   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.228701   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:46.228706   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:46.228759   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:46.265278   67282 cri.go:89] found id: ""
	I1004 04:27:46.265303   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.265311   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:46.265325   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:46.265338   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:46.315135   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:46.315163   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:46.327765   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:46.327797   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:46.393157   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:46.393173   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:46.393184   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:46.473026   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:46.473058   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:46.049285   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:48.549053   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:49.647027   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:52.146841   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:50.122921   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:52.622913   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:49.011972   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:49.025718   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:49.025783   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:49.062749   67282 cri.go:89] found id: ""
	I1004 04:27:49.062774   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.062782   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:49.062788   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:49.062844   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:49.100838   67282 cri.go:89] found id: ""
	I1004 04:27:49.100886   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.100897   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:49.100904   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:49.100961   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:49.139966   67282 cri.go:89] found id: ""
	I1004 04:27:49.139990   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.140000   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:49.140007   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:49.140088   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:49.179347   67282 cri.go:89] found id: ""
	I1004 04:27:49.179373   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.179384   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:49.179391   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:49.179435   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:49.218086   67282 cri.go:89] found id: ""
	I1004 04:27:49.218112   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.218121   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:49.218127   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:49.218181   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:49.254779   67282 cri.go:89] found id: ""
	I1004 04:27:49.254811   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.254823   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:49.254830   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:49.254888   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:49.287351   67282 cri.go:89] found id: ""
	I1004 04:27:49.287381   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.287392   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:49.287398   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:49.287456   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:49.320051   67282 cri.go:89] found id: ""
	I1004 04:27:49.320078   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.320089   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:49.320100   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:49.320112   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:49.371270   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:49.371300   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:49.384403   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:49.384432   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:49.468132   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:49.468154   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:49.468167   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:49.543179   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:49.543211   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:52.093235   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:52.108446   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:52.108520   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:52.147590   67282 cri.go:89] found id: ""
	I1004 04:27:52.147613   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.147620   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:52.147626   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:52.147677   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:52.183066   67282 cri.go:89] found id: ""
	I1004 04:27:52.183095   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.183105   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:52.183112   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:52.183170   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:52.223109   67282 cri.go:89] found id: ""
	I1004 04:27:52.223140   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.223154   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:52.223165   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:52.223223   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:52.259547   67282 cri.go:89] found id: ""
	I1004 04:27:52.259573   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.259582   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:52.259587   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:52.259638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:52.296934   67282 cri.go:89] found id: ""
	I1004 04:27:52.296961   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.296971   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:52.296979   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:52.297040   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:52.331650   67282 cri.go:89] found id: ""
	I1004 04:27:52.331671   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.331679   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:52.331684   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:52.331728   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:52.365111   67282 cri.go:89] found id: ""
	I1004 04:27:52.365139   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.365150   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:52.365157   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:52.365239   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:52.400974   67282 cri.go:89] found id: ""
	I1004 04:27:52.401010   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.401023   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:52.401035   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:52.401049   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:52.484732   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:52.484771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:52.523322   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:52.523348   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:52.576671   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:52.576702   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:52.590263   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:52.590291   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:52.666646   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:50.549475   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:53.049259   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:54.646262   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.153196   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:55.123174   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.123932   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:55.166856   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:55.181481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:55.181562   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:55.218023   67282 cri.go:89] found id: ""
	I1004 04:27:55.218048   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.218056   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:55.218063   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:55.218121   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:55.256439   67282 cri.go:89] found id: ""
	I1004 04:27:55.256464   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.256472   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:55.256477   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:55.256531   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:55.294563   67282 cri.go:89] found id: ""
	I1004 04:27:55.294588   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.294596   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:55.294601   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:55.294656   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:55.331266   67282 cri.go:89] found id: ""
	I1004 04:27:55.331290   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.331300   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:55.331306   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:55.331370   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:55.367286   67282 cri.go:89] found id: ""
	I1004 04:27:55.367314   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.367325   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:55.367332   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:55.367391   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:55.402031   67282 cri.go:89] found id: ""
	I1004 04:27:55.402054   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.402062   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:55.402068   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:55.402122   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:55.437737   67282 cri.go:89] found id: ""
	I1004 04:27:55.437764   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.437774   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:55.437780   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:55.437842   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:55.470654   67282 cri.go:89] found id: ""
	I1004 04:27:55.470692   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.470704   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:55.470713   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:55.470726   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:55.521364   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:55.521393   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:55.534691   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:55.534716   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:55.600902   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:55.600923   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:55.600933   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:55.678896   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:55.678940   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:58.220086   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:58.234049   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:58.234110   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:58.281112   67282 cri.go:89] found id: ""
	I1004 04:27:58.281135   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.281143   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:58.281148   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:58.281191   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:58.320549   67282 cri.go:89] found id: ""
	I1004 04:27:58.320575   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.320584   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:58.320589   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:58.320635   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:58.355139   67282 cri.go:89] found id: ""
	I1004 04:27:58.355166   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.355174   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:58.355179   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:58.355225   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:58.387809   67282 cri.go:89] found id: ""
	I1004 04:27:58.387836   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.387846   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:58.387851   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:58.387908   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:58.420264   67282 cri.go:89] found id: ""
	I1004 04:27:58.420287   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.420295   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:58.420300   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:58.420349   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:58.455409   67282 cri.go:89] found id: ""
	I1004 04:27:58.455431   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.455438   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:58.455443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:58.455487   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:58.488708   67282 cri.go:89] found id: ""
	I1004 04:27:58.488734   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.488742   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:58.488749   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:58.488797   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:55.051622   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.548584   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:59.646699   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:01.648277   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:59.623008   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:02.122303   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:58.522139   67282 cri.go:89] found id: ""
	I1004 04:27:58.522161   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.522169   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:58.522176   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:58.522187   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:58.604653   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:58.604683   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:58.645141   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:58.645169   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:58.699716   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:58.699748   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:58.713197   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:58.713228   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:58.781998   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:01.282429   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:01.297266   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:01.297343   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:01.330421   67282 cri.go:89] found id: ""
	I1004 04:28:01.330446   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.330454   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:28:01.330459   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:01.330514   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:01.366960   67282 cri.go:89] found id: ""
	I1004 04:28:01.366983   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.366992   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:28:01.366998   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:01.367067   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:01.400886   67282 cri.go:89] found id: ""
	I1004 04:28:01.400910   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.400920   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:28:01.400931   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:01.400987   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:01.435556   67282 cri.go:89] found id: ""
	I1004 04:28:01.435586   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.435594   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:28:01.435601   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:01.435649   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:01.475772   67282 cri.go:89] found id: ""
	I1004 04:28:01.475810   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.475820   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:28:01.475826   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:01.475884   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:01.512380   67282 cri.go:89] found id: ""
	I1004 04:28:01.512403   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.512411   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:28:01.512417   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:01.512465   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:01.550488   67282 cri.go:89] found id: ""
	I1004 04:28:01.550517   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.550528   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:01.550536   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:28:01.550595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:28:01.586216   67282 cri.go:89] found id: ""
	I1004 04:28:01.586249   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.586261   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:28:01.586271   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:01.586285   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:01.640819   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:01.640860   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:01.656990   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:01.657020   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:28:01.731326   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:01.731354   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:01.731368   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:01.810007   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:28:01.810044   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:59.548748   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:01.551035   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.043116   66755 pod_ready.go:82] duration metric: took 4m0.000354814s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" ...
	E1004 04:28:04.043143   66755 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" (will not retry!)
	I1004 04:28:04.043167   66755 pod_ready.go:39] duration metric: took 4m15.403862245s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:04.043219   66755 kubeadm.go:597] duration metric: took 4m23.226496183s to restartPrimaryControlPlane
	W1004 04:28:04.043288   66755 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1004 04:28:04.043316   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
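The 4m0s WaitExtra timeout above is what pushes this run from "restart control plane" into a full kubeadm reset. The same readiness condition can be checked directly against the cluster; a hedged example (pod name copied from the log, timeout mirrors the 4m0s budget, and it assumes a reachable kubeconfig for the profile):

    kubectl -n kube-system wait --for=condition=Ready \
      pod/metrics-server-6867b74b74-d5b6b --timeout=240s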
	I1004 04:28:04.146697   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:06.147038   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:08.147201   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.122463   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:06.622379   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.352648   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:04.366150   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:04.366227   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:04.403272   67282 cri.go:89] found id: ""
	I1004 04:28:04.403298   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.403308   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:28:04.403315   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:04.403371   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:04.439237   67282 cri.go:89] found id: ""
	I1004 04:28:04.439269   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.439280   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:28:04.439287   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:04.439345   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:04.475532   67282 cri.go:89] found id: ""
	I1004 04:28:04.475558   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.475569   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:28:04.475576   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:04.475638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:04.511738   67282 cri.go:89] found id: ""
	I1004 04:28:04.511765   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.511775   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:28:04.511792   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:04.511850   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:04.553536   67282 cri.go:89] found id: ""
	I1004 04:28:04.553561   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.553568   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:28:04.553574   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:04.553625   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:04.589016   67282 cri.go:89] found id: ""
	I1004 04:28:04.589044   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.589053   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:28:04.589058   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:04.589106   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:04.622780   67282 cri.go:89] found id: ""
	I1004 04:28:04.622808   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.622817   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:04.622823   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:28:04.622879   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:28:04.662620   67282 cri.go:89] found id: ""
	I1004 04:28:04.662641   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.662649   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:28:04.662659   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:04.662669   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:04.717894   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:04.717928   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:04.732353   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:04.732385   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:28:04.806443   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:04.806469   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:04.806492   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:04.887684   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:28:04.887717   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:07.426630   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:07.440242   67282 kubeadm.go:597] duration metric: took 4m3.475062199s to restartPrimaryControlPlane
	W1004 04:28:07.440318   67282 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1004 04:28:07.440346   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:28:08.147532   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:08.162175   67282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:28:08.172013   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:28:08.181741   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:28:08.181757   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:28:08.181801   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:28:08.191002   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:28:08.191046   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:28:08.200929   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:28:08.210241   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:28:08.210286   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:28:08.219693   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:28:08.229497   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:28:08.229534   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:28:08.239583   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:28:08.249207   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:28:08.249252   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
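Before re-running kubeadm init, minikube checks whether each existing kubeconfig under /etc/kubernetes still points at https://control-plane.minikube.internal:8443 and removes any file that does not; here all four files are already absent, so every grep exits with status 2 and each rm is a no-op. A minimal sketch of that stale-config sweep, assuming the same four file names shown in the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it still targets the expected control-plane endpoint
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done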
	I1004 04:28:08.258516   67282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:28:08.328054   67282 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:28:08.328132   67282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:28:08.472265   67282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:28:08.472420   67282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:28:08.472543   67282 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 04:28:08.655873   67282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:28:08.657726   67282 out.go:235]   - Generating certificates and keys ...
	I1004 04:28:08.657817   67282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:28:08.657876   67282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:28:08.657942   67282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:28:08.658034   67282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:28:08.658149   67282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:28:08.658235   67282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:28:08.658309   67282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:28:08.658396   67282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:28:08.658503   67282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:28:08.658600   67282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:28:08.658651   67282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:28:08.658707   67282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:28:08.706486   67282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:28:08.909036   67282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:28:09.285968   67282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:28:09.499963   67282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:28:09.516914   67282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:28:09.517832   67282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:28:09.517900   67282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:28:09.664925   67282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:28:10.147391   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:12.646012   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:09.121686   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:11.123086   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:13.123578   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:09.666691   67282 out.go:235]   - Booting up control plane ...
	I1004 04:28:09.666889   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:28:09.671298   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:28:09.672046   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:28:09.672956   67282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:28:09.685069   67282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
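At this point kubeadm has written the four static Pod manifests (kube-apiserver, kube-controller-manager, kube-scheduler, etcd) and waits up to 4m0s for the kubelet to start them. On the node, the expected artifacts can be inspected directly; a short hedged example (the manifest file names are the standard kubeadm ones, not quoted from this log):

    ls /etc/kubernetes/manifests
    # typically: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
    sudo crictl ps -a --name=kube-apiserver   # should list a container once the kubelet starts the static Pod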
	I1004 04:28:14.646614   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:16.646683   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:15.125374   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:17.125685   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:18.646777   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:21.147299   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:19.623872   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:22.123077   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:23.646460   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:25.647096   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:28.147324   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:24.623730   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:27.123516   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:30.379460   66755 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.336110507s)
	I1004 04:28:30.379544   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:30.395622   66755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:28:30.406790   66755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:28:30.417380   66755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:28:30.417408   66755 kubeadm.go:157] found existing configuration files:
	
	I1004 04:28:30.417458   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:28:30.427925   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:28:30.427993   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:28:30.438694   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:28:30.448898   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:28:30.448972   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:28:30.459463   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:28:30.469227   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:28:30.469281   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:28:30.479979   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:28:30.489873   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:28:30.489936   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:28:30.499999   66755 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:28:30.549707   66755 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 04:28:30.549771   66755 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:28:30.663468   66755 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:28:30.663595   66755 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:28:30.663698   66755 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 04:28:30.675750   66755 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:28:30.677655   66755 out.go:235]   - Generating certificates and keys ...
	I1004 04:28:30.677760   66755 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:28:30.677868   66755 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:28:30.678010   66755 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:28:30.678102   66755 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:28:30.678217   66755 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:28:30.678289   66755 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:28:30.678378   66755 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:28:30.678470   66755 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:28:30.678566   66755 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:28:30.678732   66755 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:28:30.679295   66755 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:28:30.679383   66755 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:28:30.826979   66755 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:28:30.900919   66755 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 04:28:31.098221   66755 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:28:31.243668   66755 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:28:31.411766   66755 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:28:31.412181   66755 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:28:31.414652   66755 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:28:30.646927   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:32.647767   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:29.129148   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:31.623284   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:31.416504   66755 out.go:235]   - Booting up control plane ...
	I1004 04:28:31.416620   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:28:31.416730   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:28:31.418284   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:28:31.437379   66755 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:28:31.443450   66755 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:28:31.443505   66755 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:28:31.586540   66755 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 04:28:31.586706   66755 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 04:28:32.088382   66755 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.195244ms
	I1004 04:28:32.088510   66755 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 04:28:37.090291   66755 kubeadm.go:310] [api-check] The API server is healthy after 5.001756025s
	I1004 04:28:37.103845   66755 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 04:28:37.127230   66755 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 04:28:37.156917   66755 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 04:28:37.157181   66755 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-934812 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 04:28:37.171399   66755 kubeadm.go:310] [bootstrap-token] Using token: 1wt5ey.lvccf2aeyngf9mt3
	I1004 04:28:34.648249   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:37.148680   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:33.623901   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:36.122762   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:38.123147   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:37.172939   66755 out.go:235]   - Configuring RBAC rules ...
	I1004 04:28:37.173086   66755 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 04:28:37.179454   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 04:28:37.188765   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 04:28:37.192599   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 04:28:37.200359   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 04:28:37.204872   66755 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 04:28:37.498753   66755 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 04:28:37.931621   66755 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 04:28:38.497855   66755 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 04:28:38.498949   66755 kubeadm.go:310] 
	I1004 04:28:38.499023   66755 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 04:28:38.499055   66755 kubeadm.go:310] 
	I1004 04:28:38.499183   66755 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 04:28:38.499195   66755 kubeadm.go:310] 
	I1004 04:28:38.499229   66755 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 04:28:38.499316   66755 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 04:28:38.499385   66755 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 04:28:38.499393   66755 kubeadm.go:310] 
	I1004 04:28:38.499481   66755 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 04:28:38.499498   66755 kubeadm.go:310] 
	I1004 04:28:38.499563   66755 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 04:28:38.499571   66755 kubeadm.go:310] 
	I1004 04:28:38.499653   66755 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 04:28:38.499742   66755 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 04:28:38.499871   66755 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 04:28:38.499888   66755 kubeadm.go:310] 
	I1004 04:28:38.499994   66755 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 04:28:38.500104   66755 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 04:28:38.500115   66755 kubeadm.go:310] 
	I1004 04:28:38.500220   66755 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1wt5ey.lvccf2aeyngf9mt3 \
	I1004 04:28:38.500350   66755 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 \
	I1004 04:28:38.500387   66755 kubeadm.go:310] 	--control-plane 
	I1004 04:28:38.500402   66755 kubeadm.go:310] 
	I1004 04:28:38.500478   66755 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 04:28:38.500484   66755 kubeadm.go:310] 
	I1004 04:28:38.500563   66755 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1wt5ey.lvccf2aeyngf9mt3 \
	I1004 04:28:38.500686   66755 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 
	I1004 04:28:38.501820   66755 kubeadm.go:310] W1004 04:28:30.522396    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 04:28:38.502147   66755 kubeadm.go:310] W1004 04:28:30.524006    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 04:28:38.502282   66755 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:28:38.502311   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:28:38.502321   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:28:38.504185   66755 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:28:38.505600   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:28:38.518746   66755 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:28:38.541311   66755 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:28:38.541422   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:38.541460   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-934812 minikube.k8s.io/updated_at=2024_10_04T04_28_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=embed-certs-934812 minikube.k8s.io/primary=true
	I1004 04:28:38.605537   66755 ops.go:34] apiserver oom_adj: -16
	I1004 04:28:38.765084   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:39.646916   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:41.651456   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:39.265365   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:39.765925   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:40.265135   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:40.766204   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:41.265734   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:41.765404   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:42.265993   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:42.765826   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:43.265776   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:43.353243   66755 kubeadm.go:1113] duration metric: took 4.811892444s to wait for elevateKubeSystemPrivileges
	I1004 04:28:43.353288   66755 kubeadm.go:394] duration metric: took 5m2.586827656s to StartCluster
	I1004 04:28:43.353313   66755 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:28:43.353402   66755 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:28:43.355058   66755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:28:43.355309   66755 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:28:43.355388   66755 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:28:43.355533   66755 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-934812"
	I1004 04:28:43.355542   66755 addons.go:69] Setting default-storageclass=true in profile "embed-certs-934812"
	I1004 04:28:43.355556   66755 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-934812"
	I1004 04:28:43.355563   66755 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-934812"
	W1004 04:28:43.355568   66755 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:28:43.355584   66755 config.go:182] Loaded profile config "embed-certs-934812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:28:43.355598   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.355639   66755 addons.go:69] Setting metrics-server=true in profile "embed-certs-934812"
	I1004 04:28:43.355658   66755 addons.go:234] Setting addon metrics-server=true in "embed-certs-934812"
	W1004 04:28:43.355666   66755 addons.go:243] addon metrics-server should already be in state true
	I1004 04:28:43.355694   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.356024   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356075   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356095   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.356108   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.356075   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356173   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.357087   66755 out.go:177] * Verifying Kubernetes components...
	I1004 04:28:43.358428   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:28:43.373646   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45715
	I1004 04:28:43.373874   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I1004 04:28:43.374344   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.374344   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.374927   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.374948   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.375003   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.375027   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.375285   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.375342   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.375499   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.375884   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.375928   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.376269   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43725
	I1004 04:28:43.376636   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.377073   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.377099   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.377455   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.377883   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.377918   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.378402   66755 addons.go:234] Setting addon default-storageclass=true in "embed-certs-934812"
	W1004 04:28:43.378420   66755 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:28:43.378447   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.378705   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.378734   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.394001   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
	I1004 04:28:43.394289   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I1004 04:28:43.394645   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.394760   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.395195   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.395213   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.395302   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.395317   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.395596   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.395626   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.395842   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.396120   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.396160   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.397590   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.399391   66755 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:28:43.400581   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:28:43.400598   66755 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:28:43.400619   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.405134   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.405778   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I1004 04:28:43.405968   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.405996   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.406230   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.406383   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.406428   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.406571   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.406698   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.406825   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.406847   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.407455   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.407600   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.409278   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.411006   66755 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:28:40.622426   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:42.623400   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:43.412106   66755 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:28:43.412124   66755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:28:43.412389   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.414167   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
	I1004 04:28:43.414796   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.415285   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.415309   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.415657   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.415710   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.415911   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.416195   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.416217   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.416440   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.416628   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.416759   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.416856   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.418235   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.418426   66755 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:28:43.418436   66755 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:28:43.418456   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.421305   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.421761   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.421779   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.421966   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.422654   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.422789   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.422877   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.580648   66755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:28:43.615728   66755 node_ready.go:35] waiting up to 6m0s for node "embed-certs-934812" to be "Ready" ...
	I1004 04:28:43.625558   66755 node_ready.go:49] node "embed-certs-934812" has status "Ready":"True"
	I1004 04:28:43.625600   66755 node_ready.go:38] duration metric: took 9.827384ms for node "embed-certs-934812" to be "Ready" ...
	I1004 04:28:43.625612   66755 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:43.634425   66755 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:43.748926   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:28:43.774727   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:28:43.781558   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:28:43.781589   66755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:28:43.838039   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:28:43.838067   66755 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:28:43.945364   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:28:43.945392   66755 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:28:44.005000   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:28:44.253491   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.253521   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.253828   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.253896   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.253910   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.253925   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.253938   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.254130   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.254149   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.254164   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.267367   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.267396   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.267680   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.267700   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.864663   66755 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089890385s)
	I1004 04:28:44.864722   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.864734   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.865046   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.865070   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.865086   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.865095   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.866872   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.866877   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.866907   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.138868   66755 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133828074s)
	I1004 04:28:45.138926   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:45.138942   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:45.139243   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:45.139265   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.139276   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:45.139283   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:45.139484   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:45.139497   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.139507   66755 addons.go:475] Verifying addon metrics-server=true in "embed-certs-934812"
	I1004 04:28:45.141046   66755 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1004 04:28:44.147013   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:44.648117   67541 pod_ready.go:82] duration metric: took 4m0.007930603s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	E1004 04:28:44.648144   67541 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 04:28:44.648154   67541 pod_ready.go:39] duration metric: took 4m7.419382357s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:44.648170   67541 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:28:44.648200   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:44.648256   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:44.712473   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:44.712500   67541 cri.go:89] found id: ""
	I1004 04:28:44.712510   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:44.712568   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.717619   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:44.717688   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:44.760036   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:44.760061   67541 cri.go:89] found id: ""
	I1004 04:28:44.760071   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:44.760124   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.766402   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:44.766465   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:44.821766   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:44.821792   67541 cri.go:89] found id: ""
	I1004 04:28:44.821801   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:44.821858   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.826315   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:44.826370   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:44.873526   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:44.873547   67541 cri.go:89] found id: ""
	I1004 04:28:44.873556   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:44.873615   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.878375   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:44.878442   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:44.920240   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:44.920261   67541 cri.go:89] found id: ""
	I1004 04:28:44.920270   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:44.920322   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.925102   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:44.925158   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:44.967386   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:44.967406   67541 cri.go:89] found id: ""
	I1004 04:28:44.967416   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:44.967471   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.971979   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:44.972056   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:45.009842   67541 cri.go:89] found id: ""
	I1004 04:28:45.009869   67541 logs.go:282] 0 containers: []
	W1004 04:28:45.009881   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:45.009890   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:45.009952   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:45.055166   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:45.055189   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:45.055194   67541 cri.go:89] found id: ""
	I1004 04:28:45.055201   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:45.055258   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:45.060362   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:45.066118   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:45.066351   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:45.128185   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:45.128221   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:45.270042   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:45.270084   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:45.309065   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:45.309093   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:45.352299   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:45.352327   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:45.401846   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:45.401882   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:45.447474   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:45.447530   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:45.500734   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:45.500765   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:46.040224   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:46.040275   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:46.112675   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:46.112716   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:46.128530   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:46.128553   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:46.175007   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:46.175039   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:46.222706   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:46.222738   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:44.623804   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:47.122548   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:45.142166   66755 addons.go:510] duration metric: took 1.786788452s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1004 04:28:45.642731   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:46.641705   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.641730   66755 pod_ready.go:82] duration metric: took 3.007270041s for pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.641743   66755 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.646744   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.646767   66755 pod_ready.go:82] duration metric: took 5.01485ms for pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.646777   66755 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.652554   66755 pod_ready.go:93] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.652572   66755 pod_ready.go:82] duration metric: took 5.78883ms for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.652580   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:48.659404   66755 pod_ready.go:103] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:51.158765   66755 pod_ready.go:93] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.158787   66755 pod_ready.go:82] duration metric: took 4.506200726s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.158796   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.162949   66755 pod_ready.go:93] pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.162967   66755 pod_ready.go:82] duration metric: took 4.16468ms for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.162975   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9czbc" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.167309   66755 pod_ready.go:93] pod "kube-proxy-9czbc" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.167327   66755 pod_ready.go:82] duration metric: took 4.347415ms for pod "kube-proxy-9czbc" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.167334   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.171048   66755 pod_ready.go:93] pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.171065   66755 pod_ready.go:82] duration metric: took 3.724785ms for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.171071   66755 pod_ready.go:39] duration metric: took 7.545445402s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:51.171083   66755 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:28:51.171126   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:51.186751   66755 api_server.go:72] duration metric: took 7.831380288s to wait for apiserver process to appear ...
	I1004 04:28:51.186782   66755 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:28:51.186799   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:28:51.192753   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 200:
	ok
	I1004 04:28:51.194259   66755 api_server.go:141] control plane version: v1.31.1
	I1004 04:28:51.194284   66755 api_server.go:131] duration metric: took 7.491456ms to wait for apiserver health ...
	I1004 04:28:51.194292   66755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:28:51.241469   66755 system_pods.go:59] 9 kube-system pods found
	I1004 04:28:51.241491   66755 system_pods.go:61] "coredns-7c65d6cfc9-h5tbr" [87deb61f-2ce4-4d45-91da-c16557b5ef75] Running
	I1004 04:28:51.241496   66755 system_pods.go:61] "coredns-7c65d6cfc9-p52s6" [b9b3cd7f-f28e-4502-a55d-7792cfa5a6fe] Running
	I1004 04:28:51.241500   66755 system_pods.go:61] "etcd-embed-certs-934812" [487917e4-1b38-4b84-baf9-eacc1113718e] Running
	I1004 04:28:51.241503   66755 system_pods.go:61] "kube-apiserver-embed-certs-934812" [7fcfc483-3c53-415d-8329-5cae1ecd022f] Running
	I1004 04:28:51.241507   66755 system_pods.go:61] "kube-controller-manager-embed-certs-934812" [9ab25b16-916b-49f7-b34d-a12cf8552769] Running
	I1004 04:28:51.241514   66755 system_pods.go:61] "kube-proxy-9czbc" [dedff5a2-62b6-49c3-8369-9182d1c5bf7a] Running
	I1004 04:28:51.241517   66755 system_pods.go:61] "kube-scheduler-embed-certs-934812" [88a5acd5-108f-482a-ac24-7b75a30428ff] Running
	I1004 04:28:51.241525   66755 system_pods.go:61] "metrics-server-6867b74b74-fh2lk" [12e3e884-2ad3-4eaa-a505-822717e5bc8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:51.241528   66755 system_pods.go:61] "storage-provisioner" [67b4ef22-068c-4d14-840e-deab91c5ab94] Running
	I1004 04:28:51.241534   66755 system_pods.go:74] duration metric: took 47.237476ms to wait for pod list to return data ...
	I1004 04:28:51.241541   66755 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:28:51.438932   66755 default_sa.go:45] found service account: "default"
	I1004 04:28:51.438957   66755 default_sa.go:55] duration metric: took 197.410206ms for default service account to be created ...
	I1004 04:28:51.438966   66755 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:28:51.642064   66755 system_pods.go:86] 9 kube-system pods found
	I1004 04:28:51.642091   66755 system_pods.go:89] "coredns-7c65d6cfc9-h5tbr" [87deb61f-2ce4-4d45-91da-c16557b5ef75] Running
	I1004 04:28:51.642095   66755 system_pods.go:89] "coredns-7c65d6cfc9-p52s6" [b9b3cd7f-f28e-4502-a55d-7792cfa5a6fe] Running
	I1004 04:28:51.642100   66755 system_pods.go:89] "etcd-embed-certs-934812" [487917e4-1b38-4b84-baf9-eacc1113718e] Running
	I1004 04:28:51.642103   66755 system_pods.go:89] "kube-apiserver-embed-certs-934812" [7fcfc483-3c53-415d-8329-5cae1ecd022f] Running
	I1004 04:28:51.642107   66755 system_pods.go:89] "kube-controller-manager-embed-certs-934812" [9ab25b16-916b-49f7-b34d-a12cf8552769] Running
	I1004 04:28:51.642111   66755 system_pods.go:89] "kube-proxy-9czbc" [dedff5a2-62b6-49c3-8369-9182d1c5bf7a] Running
	I1004 04:28:51.642115   66755 system_pods.go:89] "kube-scheduler-embed-certs-934812" [88a5acd5-108f-482a-ac24-7b75a30428ff] Running
	I1004 04:28:51.642121   66755 system_pods.go:89] "metrics-server-6867b74b74-fh2lk" [12e3e884-2ad3-4eaa-a505-822717e5bc8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:51.642124   66755 system_pods.go:89] "storage-provisioner" [67b4ef22-068c-4d14-840e-deab91c5ab94] Running
	I1004 04:28:51.642133   66755 system_pods.go:126] duration metric: took 203.1616ms to wait for k8s-apps to be running ...
	I1004 04:28:51.642139   66755 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:28:51.642176   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:51.658916   66755 system_svc.go:56] duration metric: took 16.763146ms WaitForService to wait for kubelet
	I1004 04:28:51.658948   66755 kubeadm.go:582] duration metric: took 8.303579518s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:28:51.658964   66755 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:28:51.839048   66755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:28:51.839067   66755 node_conditions.go:123] node cpu capacity is 2
	I1004 04:28:51.839076   66755 node_conditions.go:105] duration metric: took 180.108785ms to run NodePressure ...
	I1004 04:28:51.839086   66755 start.go:241] waiting for startup goroutines ...
	I1004 04:28:51.839093   66755 start.go:246] waiting for cluster config update ...
	I1004 04:28:51.839103   66755 start.go:255] writing updated cluster config ...
	I1004 04:28:51.839343   66755 ssh_runner.go:195] Run: rm -f paused
	I1004 04:28:51.887283   66755 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:28:51.889326   66755 out.go:177] * Done! kubectl is now configured to use "embed-certs-934812" cluster and "default" namespace by default
	I1004 04:28:48.765066   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:48.780955   67541 api_server.go:72] duration metric: took 4m18.802753607s to wait for apiserver process to appear ...
	I1004 04:28:48.780988   67541 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:28:48.781022   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:48.781074   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:48.817315   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:48.817337   67541 cri.go:89] found id: ""
	I1004 04:28:48.817346   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:48.817406   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.821619   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:48.821676   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:48.860019   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:48.860043   67541 cri.go:89] found id: ""
	I1004 04:28:48.860052   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:48.860101   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.864005   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:48.864065   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:48.901273   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:48.901295   67541 cri.go:89] found id: ""
	I1004 04:28:48.901303   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:48.901353   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.905950   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:48.906007   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:48.939708   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:48.939735   67541 cri.go:89] found id: ""
	I1004 04:28:48.939745   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:48.939812   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.943625   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:48.943692   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:48.979452   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:48.979481   67541 cri.go:89] found id: ""
	I1004 04:28:48.979490   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:48.979550   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.983629   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:48.983692   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:49.021137   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:49.021160   67541 cri.go:89] found id: ""
	I1004 04:28:49.021169   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:49.021242   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.025644   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:49.025712   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:49.062410   67541 cri.go:89] found id: ""
	I1004 04:28:49.062437   67541 logs.go:282] 0 containers: []
	W1004 04:28:49.062447   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:49.062452   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:49.062499   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:49.098959   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:49.098990   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:49.098996   67541 cri.go:89] found id: ""
	I1004 04:28:49.099005   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:49.099067   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.103474   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.107824   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:49.107852   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:49.228249   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:49.228278   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:49.269454   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:49.269479   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:49.305639   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:49.305666   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:49.770318   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:49.770348   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:49.808468   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:49.808493   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:49.884965   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:49.884997   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:49.901874   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:49.901898   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:49.952844   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:49.952869   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:49.986100   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:49.986141   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:50.023082   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:50.023108   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:50.074848   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:50.074876   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:50.112513   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:50.112541   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:52.658644   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:28:52.663076   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 200:
	ok
	I1004 04:28:52.663997   67541 api_server.go:141] control plane version: v1.31.1
	I1004 04:28:52.664017   67541 api_server.go:131] duration metric: took 3.8830221s to wait for apiserver health ...
	I1004 04:28:52.664024   67541 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:28:52.664045   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:52.664085   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:52.704174   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:52.704193   67541 cri.go:89] found id: ""
	I1004 04:28:52.704200   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:52.704253   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.708388   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:52.708438   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:52.743028   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:52.743053   67541 cri.go:89] found id: ""
	I1004 04:28:52.743062   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:52.743108   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.747354   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:52.747405   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:52.782350   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:52.782373   67541 cri.go:89] found id: ""
	I1004 04:28:52.782382   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:52.782424   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.786336   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:52.786394   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:52.826929   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:52.826950   67541 cri.go:89] found id: ""
	I1004 04:28:52.826958   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:52.827018   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.831039   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:52.831094   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:52.865963   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:52.865984   67541 cri.go:89] found id: ""
	I1004 04:28:52.865992   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:52.866032   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.869982   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:52.870024   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:52.919060   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:52.919081   67541 cri.go:89] found id: ""
	I1004 04:28:52.919091   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:52.919139   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.923080   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:52.923131   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:52.962615   67541 cri.go:89] found id: ""
	I1004 04:28:52.962636   67541 logs.go:282] 0 containers: []
	W1004 04:28:52.962643   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:52.962649   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:52.962706   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:52.999914   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:52.999936   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:52.999940   67541 cri.go:89] found id: ""
	I1004 04:28:52.999947   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:52.999998   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:53.003894   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:53.007759   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:53.007776   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:53.021269   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:53.021289   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:53.088683   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:53.088711   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:53.127363   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:53.127387   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:53.163467   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:53.163490   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:53.212683   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:53.212717   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:49.123892   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:51.124121   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:53.124323   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:49.686881   67282 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:28:49.687234   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:28:49.687487   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:28:53.569320   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:53.569360   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:53.644197   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:53.644231   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:53.747465   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:53.747497   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:53.788761   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:53.788798   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:53.822705   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:53.822737   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:53.857525   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:53.857548   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:53.894880   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:53.894904   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:56.455254   67541 system_pods.go:59] 8 kube-system pods found
	I1004 04:28:56.455286   67541 system_pods.go:61] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running
	I1004 04:28:56.455293   67541 system_pods.go:61] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running
	I1004 04:28:56.455299   67541 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running
	I1004 04:28:56.455304   67541 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running
	I1004 04:28:56.455309   67541 system_pods.go:61] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:28:56.455314   67541 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running
	I1004 04:28:56.455322   67541 system_pods.go:61] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:56.455329   67541 system_pods.go:61] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:28:56.455338   67541 system_pods.go:74] duration metric: took 3.791308758s to wait for pod list to return data ...
	I1004 04:28:56.455347   67541 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:28:56.457799   67541 default_sa.go:45] found service account: "default"
	I1004 04:28:56.457817   67541 default_sa.go:55] duration metric: took 2.463452ms for default service account to be created ...
	I1004 04:28:56.457825   67541 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:28:56.462569   67541 system_pods.go:86] 8 kube-system pods found
	I1004 04:28:56.462593   67541 system_pods.go:89] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running
	I1004 04:28:56.462601   67541 system_pods.go:89] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running
	I1004 04:28:56.462608   67541 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running
	I1004 04:28:56.462615   67541 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running
	I1004 04:28:56.462620   67541 system_pods.go:89] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:28:56.462626   67541 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running
	I1004 04:28:56.462632   67541 system_pods.go:89] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:56.462637   67541 system_pods.go:89] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:28:56.462645   67541 system_pods.go:126] duration metric: took 4.814032ms to wait for k8s-apps to be running ...
	I1004 04:28:56.462657   67541 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:28:56.462749   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:56.478944   67541 system_svc.go:56] duration metric: took 16.282384ms WaitForService to wait for kubelet
	I1004 04:28:56.478966   67541 kubeadm.go:582] duration metric: took 4m26.500769346s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:28:56.478982   67541 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:28:56.481946   67541 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:28:56.481968   67541 node_conditions.go:123] node cpu capacity is 2
	I1004 04:28:56.481980   67541 node_conditions.go:105] duration metric: took 2.992423ms to run NodePressure ...
	I1004 04:28:56.481993   67541 start.go:241] waiting for startup goroutines ...
	I1004 04:28:56.482006   67541 start.go:246] waiting for cluster config update ...
	I1004 04:28:56.482018   67541 start.go:255] writing updated cluster config ...
	I1004 04:28:56.482450   67541 ssh_runner.go:195] Run: rm -f paused
	I1004 04:28:56.528299   67541 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:28:56.530289   67541 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-281471" cluster and "default" namespace by default
	I1004 04:28:55.625569   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:58.122544   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:54.687773   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:28:54.688026   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:29:00.124374   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:02.624622   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:05.123726   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:07.622036   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:04.688599   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:29:04.688808   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:29:09.623060   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:11.623590   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:12.123919   66293 pod_ready.go:82] duration metric: took 4m0.007496621s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	E1004 04:29:12.123939   66293 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 04:29:12.123946   66293 pod_ready.go:39] duration metric: took 4m3.607239118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:29:12.123960   66293 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:29:12.123985   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:12.124023   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:12.174748   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:12.174767   66293 cri.go:89] found id: ""
	I1004 04:29:12.174775   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:12.174823   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.179374   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:12.179436   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:12.219617   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:12.219637   66293 cri.go:89] found id: ""
	I1004 04:29:12.219646   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:12.219699   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.223774   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:12.223844   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:12.261339   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:12.261360   66293 cri.go:89] found id: ""
	I1004 04:29:12.261369   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:12.261424   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.265364   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:12.265414   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:12.313178   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:12.313197   66293 cri.go:89] found id: ""
	I1004 04:29:12.313206   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:12.313271   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.317440   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:12.317498   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:12.353037   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:12.353054   66293 cri.go:89] found id: ""
	I1004 04:29:12.353072   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:12.353125   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.357212   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:12.357272   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:12.392082   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:12.392106   66293 cri.go:89] found id: ""
	I1004 04:29:12.392115   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:12.392167   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.396333   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:12.396395   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:12.439298   66293 cri.go:89] found id: ""
	I1004 04:29:12.439329   66293 logs.go:282] 0 containers: []
	W1004 04:29:12.439337   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:12.439343   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:12.439387   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:12.478798   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:12.478814   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:12.478818   66293 cri.go:89] found id: ""
	I1004 04:29:12.478824   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:12.478866   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.483035   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.486977   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:12.486992   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:12.520849   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:12.520875   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:13.072628   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:13.072671   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:13.137973   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:13.138000   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:13.259585   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:13.259611   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:13.312315   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:13.312340   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:13.352351   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:13.352377   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:13.391319   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:13.391352   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:13.430681   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:13.430712   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:13.464929   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:13.464957   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:13.505312   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:13.505340   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:13.520476   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:13.520517   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:13.582723   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:13.582752   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:16.131437   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:29:16.150426   66293 api_server.go:72] duration metric: took 4m14.921074088s to wait for apiserver process to appear ...
	I1004 04:29:16.150457   66293 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:29:16.150498   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:16.150559   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:16.197236   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:16.197265   66293 cri.go:89] found id: ""
	I1004 04:29:16.197275   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:16.197341   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.202103   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:16.202187   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:16.236881   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:16.236907   66293 cri.go:89] found id: ""
	I1004 04:29:16.236916   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:16.236976   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.241220   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:16.241289   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:16.275727   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:16.275750   66293 cri.go:89] found id: ""
	I1004 04:29:16.275759   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:16.275828   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.280282   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:16.280352   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:16.320297   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:16.320323   66293 cri.go:89] found id: ""
	I1004 04:29:16.320332   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:16.320386   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.324982   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:16.325038   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:16.367062   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:16.367081   66293 cri.go:89] found id: ""
	I1004 04:29:16.367089   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:16.367143   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.371124   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:16.371182   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:16.405706   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:16.405728   66293 cri.go:89] found id: ""
	I1004 04:29:16.405738   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:16.405785   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.410027   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:16.410084   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:16.444937   66293 cri.go:89] found id: ""
	I1004 04:29:16.444961   66293 logs.go:282] 0 containers: []
	W1004 04:29:16.444971   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:16.444978   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:16.445032   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:16.480123   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:16.480153   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:16.480160   66293 cri.go:89] found id: ""
	I1004 04:29:16.480168   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:16.480228   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.484216   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.488156   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:16.488177   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:16.501573   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:16.501591   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:16.600789   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:16.600814   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:16.641604   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:16.641634   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:16.696735   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:16.696764   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:16.737153   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:16.737177   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:17.188490   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:17.188546   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:17.262072   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:17.262108   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:17.310881   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:17.310911   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:17.356105   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:17.356135   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:17.398916   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:17.398948   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:17.440122   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:17.440149   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:17.482529   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:17.482553   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:20.034163   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:29:20.039165   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 200:
	ok
	I1004 04:29:20.040105   66293 api_server.go:141] control plane version: v1.31.1
	I1004 04:29:20.040124   66293 api_server.go:131] duration metric: took 3.889660333s to wait for apiserver health ...
	I1004 04:29:20.040131   66293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:29:20.040156   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:20.040203   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:20.078208   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:20.078234   66293 cri.go:89] found id: ""
	I1004 04:29:20.078244   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:20.078306   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.082751   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:20.082808   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:20.128002   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:20.128024   66293 cri.go:89] found id: ""
	I1004 04:29:20.128034   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:20.128084   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.132039   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:20.132097   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:20.171887   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:20.171911   66293 cri.go:89] found id: ""
	I1004 04:29:20.171921   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:20.171978   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.176095   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:20.176150   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:20.215155   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:20.215175   66293 cri.go:89] found id: ""
	I1004 04:29:20.215183   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:20.215241   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.219738   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:20.219814   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:20.256116   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:20.256134   66293 cri.go:89] found id: ""
	I1004 04:29:20.256142   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:20.256194   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.261201   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:20.261281   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:20.302328   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:20.302350   66293 cri.go:89] found id: ""
	I1004 04:29:20.302359   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:20.302414   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.306488   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:20.306551   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:20.341266   66293 cri.go:89] found id: ""
	I1004 04:29:20.341290   66293 logs.go:282] 0 containers: []
	W1004 04:29:20.341300   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:20.341307   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:20.341361   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:20.379560   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:20.379584   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:20.379589   66293 cri.go:89] found id: ""
	I1004 04:29:20.379598   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:20.379653   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.383816   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.388118   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:20.388137   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:20.487661   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:20.487686   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:20.539728   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:20.539754   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:20.577435   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:20.577463   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:20.616450   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:20.616480   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:20.658292   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:20.658316   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:20.733483   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:20.733515   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:20.749004   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:20.749033   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:20.799355   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:20.799383   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:20.839676   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:20.839699   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:20.874870   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:20.874896   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:20.912635   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:20.912658   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:20.968377   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:20.968405   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:23.820462   66293 system_pods.go:59] 8 kube-system pods found
	I1004 04:29:23.820491   66293 system_pods.go:61] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running
	I1004 04:29:23.820497   66293 system_pods.go:61] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running
	I1004 04:29:23.820501   66293 system_pods.go:61] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running
	I1004 04:29:23.820506   66293 system_pods.go:61] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running
	I1004 04:29:23.820514   66293 system_pods.go:61] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running
	I1004 04:29:23.820517   66293 system_pods.go:61] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running
	I1004 04:29:23.820524   66293 system_pods.go:61] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:29:23.820529   66293 system_pods.go:61] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running
	I1004 04:29:23.820537   66293 system_pods.go:74] duration metric: took 3.780400092s to wait for pod list to return data ...
	I1004 04:29:23.820544   66293 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:29:23.823119   66293 default_sa.go:45] found service account: "default"
	I1004 04:29:23.823137   66293 default_sa.go:55] duration metric: took 2.58707ms for default service account to be created ...
	I1004 04:29:23.823144   66293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:29:23.827365   66293 system_pods.go:86] 8 kube-system pods found
	I1004 04:29:23.827385   66293 system_pods.go:89] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running
	I1004 04:29:23.827389   66293 system_pods.go:89] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running
	I1004 04:29:23.827393   66293 system_pods.go:89] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running
	I1004 04:29:23.827397   66293 system_pods.go:89] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running
	I1004 04:29:23.827400   66293 system_pods.go:89] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running
	I1004 04:29:23.827405   66293 system_pods.go:89] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running
	I1004 04:29:23.827410   66293 system_pods.go:89] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:29:23.827415   66293 system_pods.go:89] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running
	I1004 04:29:23.827422   66293 system_pods.go:126] duration metric: took 4.27475ms to wait for k8s-apps to be running ...
	I1004 04:29:23.827428   66293 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:29:23.827468   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:29:23.844696   66293 system_svc.go:56] duration metric: took 17.261418ms WaitForService to wait for kubelet
	I1004 04:29:23.844724   66293 kubeadm.go:582] duration metric: took 4m22.61537826s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:29:23.844746   66293 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:29:23.847873   66293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:29:23.847892   66293 node_conditions.go:123] node cpu capacity is 2
	I1004 04:29:23.847902   66293 node_conditions.go:105] duration metric: took 3.149916ms to run NodePressure ...
	I1004 04:29:23.847915   66293 start.go:241] waiting for startup goroutines ...
	I1004 04:29:23.847923   66293 start.go:246] waiting for cluster config update ...
	I1004 04:29:23.847932   66293 start.go:255] writing updated cluster config ...
	I1004 04:29:23.848202   66293 ssh_runner.go:195] Run: rm -f paused
	I1004 04:29:23.894092   66293 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:29:23.895736   66293 out.go:177] * Done! kubectl is now configured to use "no-preload-658545" cluster and "default" namespace by default
	I1004 04:29:24.690241   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:29:24.690419   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:04.692816   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:04.693091   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:04.693114   67282 kubeadm.go:310] 
	I1004 04:30:04.693149   67282 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:30:04.693214   67282 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:30:04.693236   67282 kubeadm.go:310] 
	I1004 04:30:04.693295   67282 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:30:04.693327   67282 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:30:04.693451   67282 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:30:04.693460   67282 kubeadm.go:310] 
	I1004 04:30:04.693568   67282 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:30:04.693614   67282 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:30:04.693668   67282 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:30:04.693688   67282 kubeadm.go:310] 
	I1004 04:30:04.693843   67282 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:30:04.693966   67282 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:30:04.693982   67282 kubeadm.go:310] 
	I1004 04:30:04.694097   67282 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:30:04.694218   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:30:04.694305   67282 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:30:04.694387   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:30:04.694399   67282 kubeadm.go:310] 
	I1004 04:30:04.695379   67282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:30:04.695478   67282 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:30:04.695566   67282 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1004 04:30:04.695695   67282 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1004 04:30:04.695742   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:30:05.153635   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:30:05.170057   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:30:05.179541   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:30:05.179563   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:30:05.179611   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:30:05.188969   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:30:05.189025   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:30:05.198049   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:30:05.207031   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:30:05.207118   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:30:05.216934   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:30:05.226477   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:30:05.226541   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:30:05.236222   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:30:05.245314   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:30:05.245374   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:30:05.255762   67282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:30:05.329816   67282 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:30:05.329953   67282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:30:05.482342   67282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:30:05.482549   67282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:30:05.482692   67282 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 04:30:05.666400   67282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:30:05.668115   67282 out.go:235]   - Generating certificates and keys ...
	I1004 04:30:05.668217   67282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:30:05.668319   67282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:30:05.668460   67282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:30:05.668562   67282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:30:05.668660   67282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:30:05.668734   67282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:30:05.668823   67282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:30:05.668905   67282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:30:05.669010   67282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:30:05.669130   67282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:30:05.669186   67282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:30:05.669269   67282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:30:05.773446   67282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:30:05.823736   67282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:30:05.951294   67282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:30:06.250340   67282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:30:06.275797   67282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:30:06.276877   67282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:30:06.276944   67282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:30:06.437286   67282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:30:06.438849   67282 out.go:235]   - Booting up control plane ...
	I1004 04:30:06.438952   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:30:06.443688   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:30:06.444596   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:30:06.445267   67282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:30:06.457334   67282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 04:30:46.456706   67282 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:30:46.456854   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:46.457117   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:51.456986   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:51.457240   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:31:01.457062   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:31:01.457288   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:31:21.456976   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:31:21.457277   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:32:01.456978   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:32:01.457225   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:32:01.457249   67282 kubeadm.go:310] 
	I1004 04:32:01.457312   67282 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:32:01.457374   67282 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:32:01.457383   67282 kubeadm.go:310] 
	I1004 04:32:01.457434   67282 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:32:01.457512   67282 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:32:01.457678   67282 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:32:01.457692   67282 kubeadm.go:310] 
	I1004 04:32:01.457838   67282 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:32:01.457892   67282 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:32:01.457946   67282 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:32:01.457957   67282 kubeadm.go:310] 
	I1004 04:32:01.458102   67282 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:32:01.458217   67282 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:32:01.458233   67282 kubeadm.go:310] 
	I1004 04:32:01.458379   67282 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:32:01.458494   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:32:01.458604   67282 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:32:01.458699   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:32:01.458710   67282 kubeadm.go:310] 
	I1004 04:32:01.459157   67282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:32:01.459272   67282 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:32:01.459386   67282 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1004 04:32:01.459464   67282 kubeadm.go:394] duration metric: took 7m57.553695137s to StartCluster
	I1004 04:32:01.459522   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:32:01.459586   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:32:01.500997   67282 cri.go:89] found id: ""
	I1004 04:32:01.501026   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.501037   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:32:01.501044   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:32:01.501102   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:32:01.537240   67282 cri.go:89] found id: ""
	I1004 04:32:01.537276   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.537288   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:32:01.537295   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:32:01.537349   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:32:01.573959   67282 cri.go:89] found id: ""
	I1004 04:32:01.573995   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.574007   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:32:01.574013   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:32:01.574074   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:32:01.610614   67282 cri.go:89] found id: ""
	I1004 04:32:01.610645   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.610657   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:32:01.610665   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:32:01.610716   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:32:01.645520   67282 cri.go:89] found id: ""
	I1004 04:32:01.645554   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.645567   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:32:01.645574   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:32:01.645640   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:32:01.679787   67282 cri.go:89] found id: ""
	I1004 04:32:01.679814   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.679823   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:32:01.679828   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:32:01.679873   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:32:01.714860   67282 cri.go:89] found id: ""
	I1004 04:32:01.714883   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.714891   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:32:01.714897   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:32:01.714952   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:32:01.761170   67282 cri.go:89] found id: ""
	I1004 04:32:01.761198   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.761208   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:32:01.761220   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:32:01.761232   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:32:01.822966   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:32:01.823006   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:32:01.839482   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:32:01.839510   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:32:01.917863   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:32:01.917887   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:32:01.917901   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:32:02.027216   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:32:02.027247   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1004 04:32:02.069804   67282 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1004 04:32:02.069852   67282 out.go:270] * 
	W1004 04:32:02.069922   67282 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:32:02.069939   67282 out.go:270] * 
	W1004 04:32:02.070740   67282 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 04:32:02.074308   67282 out.go:201] 
	W1004 04:32:02.075387   67282 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:32:02.075427   67282 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1004 04:32:02.075458   67282 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1004 04:32:02.076675   67282 out.go:201] 
	
	
	==> CRI-O <==
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.172417329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016867172397749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c956105-9dd3-4474-b710-949174c6f771 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.173143696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7f7e19e-4056-453d-83a1-3428d8049906 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.173199523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7f7e19e-4056-453d-83a1-3428d8049906 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.173233989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c7f7e19e-4056-453d-83a1-3428d8049906 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.203699978Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dff2ebd4-802a-4566-babb-8f24f70b600b name=/runtime.v1.RuntimeService/Version
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.203774403Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dff2ebd4-802a-4566-babb-8f24f70b600b name=/runtime.v1.RuntimeService/Version
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.205027685Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=faf83bd9-4c76-4e08-afb5-a217e4060eb6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.205402244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016867205382015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=faf83bd9-4c76-4e08-afb5-a217e4060eb6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.206038673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=856af4d7-6899-4eef-a6a7-f56a0e14da9f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.206092295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=856af4d7-6899-4eef-a6a7-f56a0e14da9f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.206121032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=856af4d7-6899-4eef-a6a7-f56a0e14da9f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.238731066Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ffa39449-e948-4dbd-81b6-879ab86057f7 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.238810045Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ffa39449-e948-4dbd-81b6-879ab86057f7 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.240215488Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8311d961-66ce-4944-984d-61d3d478f456 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.240639400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016867240615415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8311d961-66ce-4944-984d-61d3d478f456 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.241405968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c715544-0fe3-433e-8bd9-0e83a0c9bd95 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.241459439Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c715544-0fe3-433e-8bd9-0e83a0c9bd95 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.241490527Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1c715544-0fe3-433e-8bd9-0e83a0c9bd95 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.273815544Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e94de548-534f-4066-90f5-5304335a9039 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.273911875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e94de548-534f-4066-90f5-5304335a9039 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.275596122Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e94a5a4c-a8da-47b1-918f-67ff599a27c3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.276148071Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016867276114389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e94a5a4c-a8da-47b1-918f-67ff599a27c3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.276755346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4878fd21-eb6e-4596-ad0b-7fab142ceb3b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.276831095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4878fd21-eb6e-4596-ad0b-7fab142ceb3b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:41:07 old-k8s-version-420062 crio[636]: time="2024-10-04 04:41:07.276868254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4878fd21-eb6e-4596-ad0b-7fab142ceb3b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 4 04:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057605] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040409] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.074027] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.556132] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.574130] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.887139] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.071312] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072511] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.216496] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.132348] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.289222] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[Oct 4 04:24] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.060637] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.786232] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[ +11.909104] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 4 04:28] systemd-fstab-generator[5073]: Ignoring "noauto" option for root device
	[Oct 4 04:30] systemd-fstab-generator[5352]: Ignoring "noauto" option for root device
	[  +0.068575] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 04:41:07 up 17 min,  0 users,  load average: 0.02, 0.04, 0.02
	Linux old-k8s-version-420062 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6519]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000c6a480, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000b3baa0, 0x24, 0x0, ...)
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6519]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6519]: net.(*Dialer).DialContext(0xc0008e2300, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b3baa0, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6519]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6519]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0009543c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b3baa0, 0x24, 0x60, 0x7fdb80a9e5f8, 0x118, ...)
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6519]: net/http.(*Transport).dial(0xc0009c3a40, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b3baa0, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6519]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6519]: net/http.(*Transport).dialConn(0xc0009c3a40, 0x4f7fe00, 0xc000120018, 0x0, 0xc0005c7f80, 0x5, 0xc000b3baa0, 0x24, 0x0, 0xc0009fbc20, ...)
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6519]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6519]: net/http.(*Transport).dialConnFor(0xc0009c3a40, 0xc000b1bd90)
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6519]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6519]: created by net/http.(*Transport).queueForDial
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6519]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6519]: E1004 04:41:02.098720    6519 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dold-k8s-version-420062&limit=500&resourceVersion=0": dial tcp 192.168.50.146:8443: connect: connection refused
	Oct 04 04:41:02 old-k8s-version-420062 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 04 04:41:02 old-k8s-version-420062 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 04 04:41:02 old-k8s-version-420062 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Oct 04 04:41:02 old-k8s-version-420062 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 04 04:41:02 old-k8s-version-420062 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6528]: I1004 04:41:02.834035    6528 server.go:416] Version: v1.20.0
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6528]: I1004 04:41:02.834436    6528 server.go:837] Client rotation is on, will bootstrap in background
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6528]: I1004 04:41:02.837045    6528 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6528]: W1004 04:41:02.838368    6528 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 04 04:41:02 old-k8s-version-420062 kubelet[6528]: I1004 04:41:02.838429    6528 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-420062 -n old-k8s-version-420062
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-420062 -n old-k8s-version-420062: exit status 2 (231.404145ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-420062" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.43s)
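The failure above ends with minikube's own suggestion to retry with an explicit kubelet cgroup driver. A minimal sketch of that retry, assuming the profile name, runtime, and Kubernetes version shown in the logs (the job's full original start flags are not reproduced here):

	# re-create the profile with the cgroup driver minikube suggested
	minikube delete -p old-k8s-version-420062
	minikube start -p old-k8s-version-420062 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

	# on the node, confirm why the kubelet keeps exiting (restart counter was at 114 above)
	minikube ssh -p old-k8s-version-420062 -- sudo journalctl -xeu kubelet | tail -n 50
	minikube ssh -p old-k8s-version-420062 -- sudo crictl ps -a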

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (433.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-934812 -n embed-certs-934812
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-04 04:45:05.364288986 +0000 UTC m=+7024.297229546
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-934812 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-934812 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.663µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-934812 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
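For triage, the wait that timed out above can be reproduced outside the test harness with a small client-go program. The following is a minimal sketch, not minikube's own test helper: the kubeconfig path is an assumption, and the 9m timeout and the "k8s-app=kubernetes-dashboard" selector simply mirror the values logged above.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path: point this at the kubeconfig for the embed-certs-934812 profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Poll every 5s for up to 9m, mirroring the test's wait window.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// Treat API errors as transient and keep polling.
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("pod %s is Running\n", p.Name)
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		log.Fatalf("dashboard pod never became Running: %v", err)
	}
}

If this stand-alone check also times out, the failure is in the cluster (dashboard addon never scheduled or pulled), not in the test harness.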
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-934812 -n embed-certs-934812
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-934812 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-934812 logs -n 25: (3.293193292s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-617497                  | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617497 --memory=2200 --alsologtostderr   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:17 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-617497 image list                           | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	| delete  | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:18 UTC |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-658545                  | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-658545                                   | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC | 04 Oct 24 04:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-281471  | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC | 04 Oct 24 04:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-420062        | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-934812                 | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-934812                                  | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:19 UTC | 04 Oct 24 04:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC | 04 Oct 24 04:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-420062             | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC | 04 Oct 24 04:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-281471       | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:21 UTC | 04 Oct 24 04:28 UTC |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:44 UTC | 04 Oct 24 04:44 UTC |
	| start   | -p auto-204413 --memory=3072                           | auto-204413                  | jenkins | v1.34.0 | 04 Oct 24 04:44 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-658545                                   | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:44 UTC | 04 Oct 24 04:44 UTC |
	| start   | -p kindnet-204413                                      | kindnet-204413               | jenkins | v1.34.0 | 04 Oct 24 04:44 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 04:44:13
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 04:44:13.966426   73894 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:44:13.966549   73894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:44:13.966557   73894 out.go:358] Setting ErrFile to fd 2...
	I1004 04:44:13.966561   73894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:44:13.966710   73894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:44:13.967328   73894 out.go:352] Setting JSON to false
	I1004 04:44:13.968287   73894 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8799,"bootTime":1728008255,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 04:44:13.968382   73894 start.go:139] virtualization: kvm guest
	I1004 04:44:13.970388   73894 out.go:177] * [kindnet-204413] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 04:44:13.971642   73894 notify.go:220] Checking for updates...
	I1004 04:44:13.971663   73894 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 04:44:13.972988   73894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 04:44:13.974200   73894 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:44:13.975573   73894 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:44:13.976881   73894 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 04:44:13.978084   73894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 04:44:13.979972   73894 config.go:182] Loaded profile config "auto-204413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:44:13.980121   73894 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:44:13.980251   73894 config.go:182] Loaded profile config "embed-certs-934812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:44:13.980362   73894 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 04:44:14.018133   73894 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 04:44:14.019235   73894 start.go:297] selected driver: kvm2
	I1004 04:44:14.019252   73894 start.go:901] validating driver "kvm2" against <nil>
	I1004 04:44:14.019267   73894 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 04:44:14.020329   73894 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:44:14.020431   73894 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 04:44:14.036100   73894 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 04:44:14.036153   73894 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 04:44:14.036458   73894 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:44:14.036499   73894 cni.go:84] Creating CNI manager for "kindnet"
	I1004 04:44:14.036506   73894 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1004 04:44:14.036573   73894 start.go:340] cluster config:
	{Name:kindnet-204413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-204413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:44:14.036695   73894 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:44:14.038581   73894 out.go:177] * Starting "kindnet-204413" primary control-plane node in "kindnet-204413" cluster
	I1004 04:44:13.689605   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:13.690143   73561 main.go:141] libmachine: (auto-204413) DBG | unable to find current IP address of domain auto-204413 in network mk-auto-204413
	I1004 04:44:13.690171   73561 main.go:141] libmachine: (auto-204413) DBG | I1004 04:44:13.690090   73583 retry.go:31] will retry after 1.148555671s: waiting for machine to come up
	I1004 04:44:14.840862   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:14.841260   73561 main.go:141] libmachine: (auto-204413) DBG | unable to find current IP address of domain auto-204413 in network mk-auto-204413
	I1004 04:44:14.841291   73561 main.go:141] libmachine: (auto-204413) DBG | I1004 04:44:14.841212   73583 retry.go:31] will retry after 1.068925325s: waiting for machine to come up
	I1004 04:44:15.911374   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:15.911856   73561 main.go:141] libmachine: (auto-204413) DBG | unable to find current IP address of domain auto-204413 in network mk-auto-204413
	I1004 04:44:15.911883   73561 main.go:141] libmachine: (auto-204413) DBG | I1004 04:44:15.911818   73583 retry.go:31] will retry after 1.205954674s: waiting for machine to come up
	I1004 04:44:17.119260   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:17.119691   73561 main.go:141] libmachine: (auto-204413) DBG | unable to find current IP address of domain auto-204413 in network mk-auto-204413
	I1004 04:44:17.119720   73561 main.go:141] libmachine: (auto-204413) DBG | I1004 04:44:17.119645   73583 retry.go:31] will retry after 1.491328038s: waiting for machine to come up
	I1004 04:44:18.613514   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:18.613983   73561 main.go:141] libmachine: (auto-204413) DBG | unable to find current IP address of domain auto-204413 in network mk-auto-204413
	I1004 04:44:18.614011   73561 main.go:141] libmachine: (auto-204413) DBG | I1004 04:44:18.613944   73583 retry.go:31] will retry after 2.202765723s: waiting for machine to come up
	I1004 04:44:14.039841   73894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:44:14.039888   73894 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 04:44:14.039900   73894 cache.go:56] Caching tarball of preloaded images
	I1004 04:44:14.040059   73894 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 04:44:14.040079   73894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 04:44:14.040187   73894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kindnet-204413/config.json ...
	I1004 04:44:14.040210   73894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kindnet-204413/config.json: {Name:mk583d5674c8eb8c08e4b3749769233cd9cff42b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:44:14.040354   73894 start.go:360] acquireMachinesLock for kindnet-204413: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:44:20.819339   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:20.819734   73561 main.go:141] libmachine: (auto-204413) DBG | unable to find current IP address of domain auto-204413 in network mk-auto-204413
	I1004 04:44:20.819764   73561 main.go:141] libmachine: (auto-204413) DBG | I1004 04:44:20.819726   73583 retry.go:31] will retry after 3.592070847s: waiting for machine to come up
	I1004 04:44:24.413022   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:24.413501   73561 main.go:141] libmachine: (auto-204413) DBG | unable to find current IP address of domain auto-204413 in network mk-auto-204413
	I1004 04:44:24.413524   73561 main.go:141] libmachine: (auto-204413) DBG | I1004 04:44:24.413438   73583 retry.go:31] will retry after 4.239833406s: waiting for machine to come up
	I1004 04:44:28.657778   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:28.658222   73561 main.go:141] libmachine: (auto-204413) DBG | unable to find current IP address of domain auto-204413 in network mk-auto-204413
	I1004 04:44:28.658255   73561 main.go:141] libmachine: (auto-204413) DBG | I1004 04:44:28.658196   73583 retry.go:31] will retry after 5.444256713s: waiting for machine to come up
	I1004 04:44:35.713073   73894 start.go:364] duration metric: took 21.672679179s to acquireMachinesLock for "kindnet-204413"
	I1004 04:44:35.713137   73894 start.go:93] Provisioning new machine with config: &{Name:kindnet-204413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-204413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:44:35.713295   73894 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 04:44:34.103880   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:34.104375   73561 main.go:141] libmachine: (auto-204413) Found IP for machine: 192.168.50.148
	I1004 04:44:34.104396   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has current primary IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:34.104401   73561 main.go:141] libmachine: (auto-204413) Reserving static IP address...
	I1004 04:44:34.104776   73561 main.go:141] libmachine: (auto-204413) DBG | unable to find host DHCP lease matching {name: "auto-204413", mac: "52:54:00:bf:c2:f7", ip: "192.168.50.148"} in network mk-auto-204413
	I1004 04:44:34.182521   73561 main.go:141] libmachine: (auto-204413) DBG | Getting to WaitForSSH function...
	I1004 04:44:34.182554   73561 main.go:141] libmachine: (auto-204413) Reserved static IP address: 192.168.50.148
	I1004 04:44:34.182569   73561 main.go:141] libmachine: (auto-204413) Waiting for SSH to be available...
	I1004 04:44:34.185209   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:34.185667   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:34.185704   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:34.185838   73561 main.go:141] libmachine: (auto-204413) DBG | Using SSH client type: external
	I1004 04:44:34.185853   73561 main.go:141] libmachine: (auto-204413) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/auto-204413/id_rsa (-rw-------)
	I1004 04:44:34.185880   73561 main.go:141] libmachine: (auto-204413) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/auto-204413/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:44:34.185905   73561 main.go:141] libmachine: (auto-204413) DBG | About to run SSH command:
	I1004 04:44:34.185918   73561 main.go:141] libmachine: (auto-204413) DBG | exit 0
	I1004 04:44:34.316162   73561 main.go:141] libmachine: (auto-204413) DBG | SSH cmd err, output: <nil>: 
	I1004 04:44:34.316440   73561 main.go:141] libmachine: (auto-204413) KVM machine creation complete!
	I1004 04:44:34.316771   73561 main.go:141] libmachine: (auto-204413) Calling .GetConfigRaw
	I1004 04:44:34.317327   73561 main.go:141] libmachine: (auto-204413) Calling .DriverName
	I1004 04:44:34.317534   73561 main.go:141] libmachine: (auto-204413) Calling .DriverName
	I1004 04:44:34.317702   73561 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 04:44:34.317717   73561 main.go:141] libmachine: (auto-204413) Calling .GetState
	I1004 04:44:34.319234   73561 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 04:44:34.319254   73561 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 04:44:34.319259   73561 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 04:44:34.319264   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHHostname
	I1004 04:44:34.321942   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:34.322342   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:34.322361   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:34.322610   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHPort
	I1004 04:44:34.322877   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:34.323066   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:34.323279   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHUsername
	I1004 04:44:34.323453   73561 main.go:141] libmachine: Using SSH client type: native
	I1004 04:44:34.323701   73561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I1004 04:44:34.323716   73561 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 04:44:34.439559   73561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:44:34.439585   73561 main.go:141] libmachine: Detecting the provisioner...
	I1004 04:44:34.439593   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHHostname
	I1004 04:44:34.442730   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:34.443096   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:34.443126   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:34.443241   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHPort
	I1004 04:44:34.443442   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:34.443681   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:34.443889   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHUsername
	I1004 04:44:34.444070   73561 main.go:141] libmachine: Using SSH client type: native
	I1004 04:44:34.444247   73561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I1004 04:44:34.444267   73561 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 04:44:34.556866   73561 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 04:44:34.556959   73561 main.go:141] libmachine: found compatible host: buildroot
	I1004 04:44:34.556969   73561 main.go:141] libmachine: Provisioning with buildroot...
	I1004 04:44:34.556976   73561 main.go:141] libmachine: (auto-204413) Calling .GetMachineName
	I1004 04:44:34.557196   73561 buildroot.go:166] provisioning hostname "auto-204413"
	I1004 04:44:34.557224   73561 main.go:141] libmachine: (auto-204413) Calling .GetMachineName
	I1004 04:44:34.557426   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHHostname
	I1004 04:44:34.560114   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:34.560486   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:34.560523   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:34.560683   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHPort
	I1004 04:44:34.560837   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:34.561012   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:34.561122   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHUsername
	I1004 04:44:34.561273   73561 main.go:141] libmachine: Using SSH client type: native
	I1004 04:44:34.561486   73561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I1004 04:44:34.561505   73561 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-204413 && echo "auto-204413" | sudo tee /etc/hostname
	I1004 04:44:34.686162   73561 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-204413
	
	I1004 04:44:34.686185   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHHostname
	I1004 04:44:34.689166   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:34.689438   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:34.689463   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:34.689644   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHPort
	I1004 04:44:34.689817   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:34.690019   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:34.690155   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHUsername
	I1004 04:44:34.690309   73561 main.go:141] libmachine: Using SSH client type: native
	I1004 04:44:34.690484   73561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I1004 04:44:34.690507   73561 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-204413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-204413/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-204413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:44:34.814020   73561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:44:34.814055   73561 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:44:34.814106   73561 buildroot.go:174] setting up certificates
	I1004 04:44:34.814118   73561 provision.go:84] configureAuth start
	I1004 04:44:34.814131   73561 main.go:141] libmachine: (auto-204413) Calling .GetMachineName
	I1004 04:44:34.814423   73561 main.go:141] libmachine: (auto-204413) Calling .GetIP
	I1004 04:44:34.817372   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:34.817781   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:34.817813   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:34.817979   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHHostname
	I1004 04:44:34.820423   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:34.820854   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:34.820881   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:34.821025   73561 provision.go:143] copyHostCerts
	I1004 04:44:34.821110   73561 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:44:34.821122   73561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:44:34.821187   73561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:44:34.821310   73561 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:44:34.821321   73561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:44:34.821346   73561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:44:34.821433   73561 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:44:34.821445   73561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:44:34.821479   73561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:44:34.821525   73561 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.auto-204413 san=[127.0.0.1 192.168.50.148 auto-204413 localhost minikube]
	I1004 04:44:35.039594   73561 provision.go:177] copyRemoteCerts
	I1004 04:44:35.039650   73561 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:44:35.039672   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHHostname
	I1004 04:44:35.042906   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.043461   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:35.043491   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.043714   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHPort
	I1004 04:44:35.043933   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:35.044082   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHUsername
	I1004 04:44:35.044319   73561 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/auto-204413/id_rsa Username:docker}
	I1004 04:44:35.130533   73561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:44:35.156367   73561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1004 04:44:35.181249   73561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 04:44:35.207085   73561 provision.go:87] duration metric: took 392.952053ms to configureAuth
	I1004 04:44:35.207118   73561 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:44:35.207274   73561 config.go:182] Loaded profile config "auto-204413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:44:35.207355   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHHostname
	I1004 04:44:35.209880   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.210175   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:35.210199   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.210436   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHPort
	I1004 04:44:35.210641   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:35.210819   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:35.210950   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHUsername
	I1004 04:44:35.211126   73561 main.go:141] libmachine: Using SSH client type: native
	I1004 04:44:35.211384   73561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I1004 04:44:35.211406   73561 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:44:35.448805   73561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:44:35.448844   73561 main.go:141] libmachine: Checking connection to Docker...
	I1004 04:44:35.448855   73561 main.go:141] libmachine: (auto-204413) Calling .GetURL
	I1004 04:44:35.450214   73561 main.go:141] libmachine: (auto-204413) DBG | Using libvirt version 6000000
	I1004 04:44:35.452563   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.452887   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:35.452919   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.453070   73561 main.go:141] libmachine: Docker is up and running!
	I1004 04:44:35.453084   73561 main.go:141] libmachine: Reticulating splines...
	I1004 04:44:35.453092   73561 client.go:171] duration metric: took 26.692777163s to LocalClient.Create
	I1004 04:44:35.453117   73561 start.go:167] duration metric: took 26.692845295s to libmachine.API.Create "auto-204413"
	I1004 04:44:35.453130   73561 start.go:293] postStartSetup for "auto-204413" (driver="kvm2")
	I1004 04:44:35.453141   73561 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:44:35.453162   73561 main.go:141] libmachine: (auto-204413) Calling .DriverName
	I1004 04:44:35.453390   73561 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:44:35.453425   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHHostname
	I1004 04:44:35.455369   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.455628   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:35.455657   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.455765   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHPort
	I1004 04:44:35.455955   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:35.456097   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHUsername
	I1004 04:44:35.456233   73561 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/auto-204413/id_rsa Username:docker}
	I1004 04:44:35.545726   73561 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:44:35.550472   73561 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:44:35.550505   73561 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:44:35.550586   73561 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:44:35.550698   73561 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:44:35.550821   73561 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:44:35.563069   73561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:44:35.590090   73561 start.go:296] duration metric: took 136.946158ms for postStartSetup
	I1004 04:44:35.590161   73561 main.go:141] libmachine: (auto-204413) Calling .GetConfigRaw
	I1004 04:44:35.590757   73561 main.go:141] libmachine: (auto-204413) Calling .GetIP
	I1004 04:44:35.593683   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.594087   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:35.594122   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.594420   73561 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/config.json ...
	I1004 04:44:35.594629   73561 start.go:128] duration metric: took 26.851918051s to createHost
	I1004 04:44:35.594653   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHHostname
	I1004 04:44:35.597455   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.597801   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:35.597830   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.597993   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHPort
	I1004 04:44:35.598174   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:35.598381   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:35.598608   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHUsername
	I1004 04:44:35.598769   73561 main.go:141] libmachine: Using SSH client type: native
	I1004 04:44:35.598928   73561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.148 22 <nil> <nil>}
	I1004 04:44:35.598939   73561 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:44:35.712886   73561 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728017075.690277887
	
	I1004 04:44:35.712911   73561 fix.go:216] guest clock: 1728017075.690277887
	I1004 04:44:35.712922   73561 fix.go:229] Guest: 2024-10-04 04:44:35.690277887 +0000 UTC Remote: 2024-10-04 04:44:35.59464318 +0000 UTC m=+26.960098701 (delta=95.634707ms)
	I1004 04:44:35.712971   73561 fix.go:200] guest clock delta is within tolerance: 95.634707ms
	I1004 04:44:35.712982   73561 start.go:83] releasing machines lock for "auto-204413", held for 26.97035874s
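	The fix.go lines above run `date +%s.%N` on the guest and compare the result with the host clock, accepting the start only if the delta stays within tolerance (here 95.634707ms). A minimal sketch of that comparison, assuming the guest output is the seconds.nanoseconds string produced by `date +%s.%N`; this is not the actual fix.go code:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the guest's `date +%s.%N` output and returns
	// guest-minus-host. `date +%N` prints nine digits, so the fractional part
	// parses directly as nanoseconds.
	func guestClockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(hostNow), nil
	}

	func main() {
		d, err := guestClockDelta("1728017075.690277887", time.Now())
		fmt.Println("guest clock delta:", d, err)
		fmt.Println("within 1s tolerance:", d.Abs() < time.Second)
	}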
	I1004 04:44:35.713035   73561 main.go:141] libmachine: (auto-204413) Calling .DriverName
	I1004 04:44:35.713286   73561 main.go:141] libmachine: (auto-204413) Calling .GetIP
	I1004 04:44:35.716274   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.716664   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:35.716695   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.716845   73561 main.go:141] libmachine: (auto-204413) Calling .DriverName
	I1004 04:44:35.717313   73561 main.go:141] libmachine: (auto-204413) Calling .DriverName
	I1004 04:44:35.717521   73561 main.go:141] libmachine: (auto-204413) Calling .DriverName
	I1004 04:44:35.717618   73561 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:44:35.717668   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHHostname
	I1004 04:44:35.717724   73561 ssh_runner.go:195] Run: cat /version.json
	I1004 04:44:35.717750   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHHostname
	I1004 04:44:35.720374   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.720613   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.720766   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:35.720782   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.720944   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:35.720961   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:35.720966   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHPort
	I1004 04:44:35.721139   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:35.721190   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHPort
	I1004 04:44:35.721268   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHUsername
	I1004 04:44:35.721464   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:35.721535   73561 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/auto-204413/id_rsa Username:docker}
	I1004 04:44:35.721641   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHUsername
	I1004 04:44:35.721784   73561 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/auto-204413/id_rsa Username:docker}
	I1004 04:44:35.809240   73561 ssh_runner.go:195] Run: systemctl --version
	I1004 04:44:35.837336   73561 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:44:35.998478   73561 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:44:36.006234   73561 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:44:36.006316   73561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:44:36.023164   73561 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:44:36.023191   73561 start.go:495] detecting cgroup driver to use...
	I1004 04:44:36.023274   73561 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:44:36.045841   73561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:44:36.061760   73561 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:44:36.061822   73561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:44:36.077298   73561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:44:36.093412   73561 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:44:36.218944   73561 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:44:36.378755   73561 docker.go:233] disabling docker service ...
	I1004 04:44:36.378833   73561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:44:36.394558   73561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:44:36.408778   73561 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:44:36.563077   73561 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:44:36.687639   73561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:44:36.702789   73561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:44:36.723283   73561 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:44:36.723330   73561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:44:36.734621   73561 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:44:36.734684   73561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:44:36.746534   73561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:44:36.758030   73561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:44:36.769351   73561 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:44:36.780897   73561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:44:36.792547   73561 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:44:36.814654   73561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
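	Taken together, the sed edits above pin the pause image, switch the cgroup manager to cgroupfs, force conmon into the pod cgroup, and re-add the unprivileged-port sysctl. Assuming the stock layout of /etc/crio/crio.conf.d/02-crio.conf on the Buildroot image, the resulting drop-in looks roughly like:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]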
	I1004 04:44:36.825482   73561 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:44:36.836134   73561 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:44:36.836192   73561 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:44:36.850855   73561 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:44:36.863138   73561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:44:36.999340   73561 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:44:37.102927   73561 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:44:37.103017   73561 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:44:37.108245   73561 start.go:563] Will wait 60s for crictl version
	I1004 04:44:37.108313   73561 ssh_runner.go:195] Run: which crictl
	I1004 04:44:37.112564   73561 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:44:37.153446   73561 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
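	The two 60s waits above (first for the crio.sock path, then for crictl to answer) are simple poll-until-deadline loops. A sketch of the socket-path wait; the poll interval is an assumption of this example, not minikube's:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for the CRI socket file until it exists or the
	// deadline passes, mirroring "Will wait 60s for socket path".
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}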
	I1004 04:44:37.153549   73561 ssh_runner.go:195] Run: crio --version
	I1004 04:44:37.183892   73561 ssh_runner.go:195] Run: crio --version
	I1004 04:44:37.216693   73561 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:44:37.218308   73561 main.go:141] libmachine: (auto-204413) Calling .GetIP
	I1004 04:44:37.224627   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:37.225051   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:37.225085   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:37.225411   73561 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1004 04:44:37.229918   73561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:44:37.243034   73561 kubeadm.go:883] updating cluster {Name:auto-204413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-204413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:44:37.243174   73561 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:44:37.243231   73561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:44:37.280037   73561 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:44:37.280109   73561 ssh_runner.go:195] Run: which lz4
	I1004 04:44:37.284514   73561 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:44:37.289274   73561 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:44:37.289311   73561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 04:44:35.715590   73894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1004 04:44:35.715803   73894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:44:35.715861   73894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:44:35.734032   73894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37775
	I1004 04:44:35.734604   73894 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:44:35.735241   73894 main.go:141] libmachine: Using API Version  1
	I1004 04:44:35.735265   73894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:44:35.735728   73894 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:44:35.735932   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetMachineName
	I1004 04:44:35.736097   73894 main.go:141] libmachine: (kindnet-204413) Calling .DriverName
	I1004 04:44:35.736349   73894 start.go:159] libmachine.API.Create for "kindnet-204413" (driver="kvm2")
	I1004 04:44:35.736380   73894 client.go:168] LocalClient.Create starting
	I1004 04:44:35.736420   73894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 04:44:35.736464   73894 main.go:141] libmachine: Decoding PEM data...
	I1004 04:44:35.736486   73894 main.go:141] libmachine: Parsing certificate...
	I1004 04:44:35.736568   73894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 04:44:35.736595   73894 main.go:141] libmachine: Decoding PEM data...
	I1004 04:44:35.736609   73894 main.go:141] libmachine: Parsing certificate...
	I1004 04:44:35.736623   73894 main.go:141] libmachine: Running pre-create checks...
	I1004 04:44:35.736634   73894 main.go:141] libmachine: (kindnet-204413) Calling .PreCreateCheck
	I1004 04:44:35.737031   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetConfigRaw
	I1004 04:44:35.737472   73894 main.go:141] libmachine: Creating machine...
	I1004 04:44:35.737486   73894 main.go:141] libmachine: (kindnet-204413) Calling .Create
	I1004 04:44:35.737604   73894 main.go:141] libmachine: (kindnet-204413) Creating KVM machine...
	I1004 04:44:35.739210   73894 main.go:141] libmachine: (kindnet-204413) DBG | found existing default KVM network
	I1004 04:44:35.740871   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:35.740681   74017 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:0a:17:6f} reservation:<nil>}
	I1004 04:44:35.742097   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:35.742010   74017 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:30:0f:dc} reservation:<nil>}
	I1004 04:44:35.743008   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:35.742929   74017 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:67:82:fc} reservation:<nil>}
	I1004 04:44:35.744265   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:35.744178   74017 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000323980}
	I1004 04:44:35.744314   73894 main.go:141] libmachine: (kindnet-204413) DBG | created network xml: 
	I1004 04:44:35.744332   73894 main.go:141] libmachine: (kindnet-204413) DBG | <network>
	I1004 04:44:35.744345   73894 main.go:141] libmachine: (kindnet-204413) DBG |   <name>mk-kindnet-204413</name>
	I1004 04:44:35.744355   73894 main.go:141] libmachine: (kindnet-204413) DBG |   <dns enable='no'/>
	I1004 04:44:35.744363   73894 main.go:141] libmachine: (kindnet-204413) DBG |   
	I1004 04:44:35.744372   73894 main.go:141] libmachine: (kindnet-204413) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1004 04:44:35.744395   73894 main.go:141] libmachine: (kindnet-204413) DBG |     <dhcp>
	I1004 04:44:35.744400   73894 main.go:141] libmachine: (kindnet-204413) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1004 04:44:35.744406   73894 main.go:141] libmachine: (kindnet-204413) DBG |     </dhcp>
	I1004 04:44:35.744410   73894 main.go:141] libmachine: (kindnet-204413) DBG |   </ip>
	I1004 04:44:35.744420   73894 main.go:141] libmachine: (kindnet-204413) DBG |   
	I1004 04:44:35.744428   73894 main.go:141] libmachine: (kindnet-204413) DBG | </network>
	I1004 04:44:35.744452   73894 main.go:141] libmachine: (kindnet-204413) DBG | 
	I1004 04:44:35.750929   73894 main.go:141] libmachine: (kindnet-204413) DBG | trying to create private KVM network mk-kindnet-204413 192.168.72.0/24...
	I1004 04:44:35.829512   73894 main.go:141] libmachine: (kindnet-204413) DBG | private KVM network mk-kindnet-204413 192.168.72.0/24 created
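	The network.go lines above walk candidate private /24 subnets, skip the ones already backing other libvirt networks (192.168.39.0/24, 192.168.50.0/24, 192.168.61.0/24), and settle on 192.168.72.0/24. A simplified sketch of that selection; the candidate list and the overlap test are assumptions of this example, not minikube's network.go:

	package main

	import (
		"fmt"
		"net"
	)

	// freePrivateSubnet returns the first candidate /24 that does not overlap
	// any subnet already in use by an existing network.
	func freePrivateSubnet(taken []*net.IPNet) (string, error) {
		candidates := []string{
			"192.168.39.0/24", "192.168.50.0/24",
			"192.168.61.0/24", "192.168.72.0/24",
		}
		for _, c := range candidates {
			_, subnet, err := net.ParseCIDR(c)
			if err != nil {
				return "", err
			}
			inUse := false
			for _, t := range taken {
				if t.Contains(subnet.IP) || subnet.Contains(t.IP) {
					inUse = true
					break
				}
			}
			if !inUse {
				return c, nil
			}
		}
		return "", fmt.Errorf("no free private subnet among candidates")
	}

	func main() {
		var taken []*net.IPNet
		for _, s := range []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"} {
			_, n, _ := net.ParseCIDR(s)
			taken = append(taken, n)
		}
		sub, err := freePrivateSubnet(taken)
		fmt.Println(sub, err) // 192.168.72.0/24 <nil>
	}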
	I1004 04:44:35.829538   73894 main.go:141] libmachine: (kindnet-204413) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/kindnet-204413 ...
	I1004 04:44:35.829563   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:35.829505   74017 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:44:35.829578   73894 main.go:141] libmachine: (kindnet-204413) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 04:44:35.829645   73894 main.go:141] libmachine: (kindnet-204413) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 04:44:36.094501   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:36.094385   74017 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/kindnet-204413/id_rsa...
	I1004 04:44:36.243189   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:36.243040   74017 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/kindnet-204413/kindnet-204413.rawdisk...
	I1004 04:44:36.243227   73894 main.go:141] libmachine: (kindnet-204413) DBG | Writing magic tar header
	I1004 04:44:36.243244   73894 main.go:141] libmachine: (kindnet-204413) DBG | Writing SSH key tar header
	I1004 04:44:36.243255   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:36.243208   74017 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/kindnet-204413 ...
	I1004 04:44:36.243458   73894 main.go:141] libmachine: (kindnet-204413) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/kindnet-204413 (perms=drwx------)
	I1004 04:44:36.243528   73894 main.go:141] libmachine: (kindnet-204413) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/kindnet-204413
	I1004 04:44:36.243540   73894 main.go:141] libmachine: (kindnet-204413) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 04:44:36.243561   73894 main.go:141] libmachine: (kindnet-204413) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 04:44:36.243587   73894 main.go:141] libmachine: (kindnet-204413) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 04:44:36.243603   73894 main.go:141] libmachine: (kindnet-204413) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 04:44:36.243615   73894 main.go:141] libmachine: (kindnet-204413) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 04:44:36.243628   73894 main.go:141] libmachine: (kindnet-204413) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 04:44:36.243635   73894 main.go:141] libmachine: (kindnet-204413) Creating domain...
	I1004 04:44:36.243649   73894 main.go:141] libmachine: (kindnet-204413) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:44:36.243662   73894 main.go:141] libmachine: (kindnet-204413) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 04:44:36.243679   73894 main.go:141] libmachine: (kindnet-204413) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 04:44:36.243686   73894 main.go:141] libmachine: (kindnet-204413) DBG | Checking permissions on dir: /home/jenkins
	I1004 04:44:36.243696   73894 main.go:141] libmachine: (kindnet-204413) DBG | Checking permissions on dir: /home
	I1004 04:44:36.243702   73894 main.go:141] libmachine: (kindnet-204413) DBG | Skipping /home - not owner
	I1004 04:44:36.244804   73894 main.go:141] libmachine: (kindnet-204413) define libvirt domain using xml: 
	I1004 04:44:36.244818   73894 main.go:141] libmachine: (kindnet-204413) <domain type='kvm'>
	I1004 04:44:36.244825   73894 main.go:141] libmachine: (kindnet-204413)   <name>kindnet-204413</name>
	I1004 04:44:36.244829   73894 main.go:141] libmachine: (kindnet-204413)   <memory unit='MiB'>3072</memory>
	I1004 04:44:36.244834   73894 main.go:141] libmachine: (kindnet-204413)   <vcpu>2</vcpu>
	I1004 04:44:36.244838   73894 main.go:141] libmachine: (kindnet-204413)   <features>
	I1004 04:44:36.244844   73894 main.go:141] libmachine: (kindnet-204413)     <acpi/>
	I1004 04:44:36.244851   73894 main.go:141] libmachine: (kindnet-204413)     <apic/>
	I1004 04:44:36.244869   73894 main.go:141] libmachine: (kindnet-204413)     <pae/>
	I1004 04:44:36.244879   73894 main.go:141] libmachine: (kindnet-204413)     
	I1004 04:44:36.244886   73894 main.go:141] libmachine: (kindnet-204413)   </features>
	I1004 04:44:36.244892   73894 main.go:141] libmachine: (kindnet-204413)   <cpu mode='host-passthrough'>
	I1004 04:44:36.244925   73894 main.go:141] libmachine: (kindnet-204413)   
	I1004 04:44:36.244950   73894 main.go:141] libmachine: (kindnet-204413)   </cpu>
	I1004 04:44:36.244969   73894 main.go:141] libmachine: (kindnet-204413)   <os>
	I1004 04:44:36.244981   73894 main.go:141] libmachine: (kindnet-204413)     <type>hvm</type>
	I1004 04:44:36.244994   73894 main.go:141] libmachine: (kindnet-204413)     <boot dev='cdrom'/>
	I1004 04:44:36.245015   73894 main.go:141] libmachine: (kindnet-204413)     <boot dev='hd'/>
	I1004 04:44:36.245028   73894 main.go:141] libmachine: (kindnet-204413)     <bootmenu enable='no'/>
	I1004 04:44:36.245035   73894 main.go:141] libmachine: (kindnet-204413)   </os>
	I1004 04:44:36.245056   73894 main.go:141] libmachine: (kindnet-204413)   <devices>
	I1004 04:44:36.245068   73894 main.go:141] libmachine: (kindnet-204413)     <disk type='file' device='cdrom'>
	I1004 04:44:36.245086   73894 main.go:141] libmachine: (kindnet-204413)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/kindnet-204413/boot2docker.iso'/>
	I1004 04:44:36.245097   73894 main.go:141] libmachine: (kindnet-204413)       <target dev='hdc' bus='scsi'/>
	I1004 04:44:36.245107   73894 main.go:141] libmachine: (kindnet-204413)       <readonly/>
	I1004 04:44:36.245116   73894 main.go:141] libmachine: (kindnet-204413)     </disk>
	I1004 04:44:36.245130   73894 main.go:141] libmachine: (kindnet-204413)     <disk type='file' device='disk'>
	I1004 04:44:36.245143   73894 main.go:141] libmachine: (kindnet-204413)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 04:44:36.245160   73894 main.go:141] libmachine: (kindnet-204413)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/kindnet-204413/kindnet-204413.rawdisk'/>
	I1004 04:44:36.245171   73894 main.go:141] libmachine: (kindnet-204413)       <target dev='hda' bus='virtio'/>
	I1004 04:44:36.245180   73894 main.go:141] libmachine: (kindnet-204413)     </disk>
	I1004 04:44:36.245190   73894 main.go:141] libmachine: (kindnet-204413)     <interface type='network'>
	I1004 04:44:36.245203   73894 main.go:141] libmachine: (kindnet-204413)       <source network='mk-kindnet-204413'/>
	I1004 04:44:36.245214   73894 main.go:141] libmachine: (kindnet-204413)       <model type='virtio'/>
	I1004 04:44:36.245226   73894 main.go:141] libmachine: (kindnet-204413)     </interface>
	I1004 04:44:36.245242   73894 main.go:141] libmachine: (kindnet-204413)     <interface type='network'>
	I1004 04:44:36.245254   73894 main.go:141] libmachine: (kindnet-204413)       <source network='default'/>
	I1004 04:44:36.245264   73894 main.go:141] libmachine: (kindnet-204413)       <model type='virtio'/>
	I1004 04:44:36.245274   73894 main.go:141] libmachine: (kindnet-204413)     </interface>
	I1004 04:44:36.245283   73894 main.go:141] libmachine: (kindnet-204413)     <serial type='pty'>
	I1004 04:44:36.245295   73894 main.go:141] libmachine: (kindnet-204413)       <target port='0'/>
	I1004 04:44:36.245304   73894 main.go:141] libmachine: (kindnet-204413)     </serial>
	I1004 04:44:36.245315   73894 main.go:141] libmachine: (kindnet-204413)     <console type='pty'>
	I1004 04:44:36.245325   73894 main.go:141] libmachine: (kindnet-204413)       <target type='serial' port='0'/>
	I1004 04:44:36.245334   73894 main.go:141] libmachine: (kindnet-204413)     </console>
	I1004 04:44:36.245345   73894 main.go:141] libmachine: (kindnet-204413)     <rng model='virtio'>
	I1004 04:44:36.245361   73894 main.go:141] libmachine: (kindnet-204413)       <backend model='random'>/dev/random</backend>
	I1004 04:44:36.245371   73894 main.go:141] libmachine: (kindnet-204413)     </rng>
	I1004 04:44:36.245379   73894 main.go:141] libmachine: (kindnet-204413)     
	I1004 04:44:36.245388   73894 main.go:141] libmachine: (kindnet-204413)     
	I1004 04:44:36.245397   73894 main.go:141] libmachine: (kindnet-204413)   </devices>
	I1004 04:44:36.245406   73894 main.go:141] libmachine: (kindnet-204413) </domain>
	I1004 04:44:36.245417   73894 main.go:141] libmachine: (kindnet-204413) 
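	Defining and starting the domain from the XML printed above maps onto libvirt's define/create calls. A sketch using the libvirt.org/go/libvirt bindings, which is an assumption of this example (it needs cgo and the libvirt development headers, and the XML file name is illustrative):

	package main

	import (
		"fmt"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		xml, err := os.ReadFile("kindnet-204413.xml") // the domain XML logged above
		if err != nil {
			panic(err)
		}
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // "Creating domain..."
			panic(err)
		}
		name, _ := dom.GetName()
		fmt.Println("started domain", name)
	}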
	I1004 04:44:36.250298   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:a5:6c:33 in network default
	I1004 04:44:36.251148   73894 main.go:141] libmachine: (kindnet-204413) Ensuring networks are active...
	I1004 04:44:36.251175   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:44:36.252089   73894 main.go:141] libmachine: (kindnet-204413) Ensuring network default is active
	I1004 04:44:36.252444   73894 main.go:141] libmachine: (kindnet-204413) Ensuring network mk-kindnet-204413 is active
	I1004 04:44:36.253036   73894 main.go:141] libmachine: (kindnet-204413) Getting domain xml...
	I1004 04:44:36.253754   73894 main.go:141] libmachine: (kindnet-204413) Creating domain...
	I1004 04:44:37.586705   73894 main.go:141] libmachine: (kindnet-204413) Waiting to get IP...
	I1004 04:44:37.587687   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:44:37.588303   73894 main.go:141] libmachine: (kindnet-204413) DBG | unable to find current IP address of domain kindnet-204413 in network mk-kindnet-204413
	I1004 04:44:37.588327   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:37.588282   74017 retry.go:31] will retry after 222.446403ms: waiting for machine to come up
	I1004 04:44:37.812961   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:44:37.813545   73894 main.go:141] libmachine: (kindnet-204413) DBG | unable to find current IP address of domain kindnet-204413 in network mk-kindnet-204413
	I1004 04:44:37.813574   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:37.813506   74017 retry.go:31] will retry after 330.295159ms: waiting for machine to come up
	I1004 04:44:38.144937   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:44:38.145451   73894 main.go:141] libmachine: (kindnet-204413) DBG | unable to find current IP address of domain kindnet-204413 in network mk-kindnet-204413
	I1004 04:44:38.145483   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:38.145375   74017 retry.go:31] will retry after 322.546028ms: waiting for machine to come up
	I1004 04:44:38.470053   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:44:38.470767   73894 main.go:141] libmachine: (kindnet-204413) DBG | unable to find current IP address of domain kindnet-204413 in network mk-kindnet-204413
	I1004 04:44:38.470800   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:38.470699   74017 retry.go:31] will retry after 546.496231ms: waiting for machine to come up
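	The retry.go lines above (222ms, 330ms, 322ms, 546ms, then progressively longer) poll for the new domain's DHCP lease with a growing, jittered backoff. A sketch of that wait loop; the lookup callback and the growth factor are assumptions of this example rather than minikube's retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP retries the lookup with an increasing, jittered delay until it
	// returns an IP or the overall deadline passes.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			// Grow the wait a little each round, with jitter, producing
			// intervals like the ones logged above.
			time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
			backoff = backoff * 3 / 2
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.72.10", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}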
	I1004 04:44:38.781647   73561 crio.go:462] duration metric: took 1.497199794s to copy over tarball
	I1004 04:44:38.781715   73561 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:44:41.230421   73561 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.448652858s)
	I1004 04:44:41.230458   73561 crio.go:469] duration metric: took 2.448782409s to extract the tarball
	I1004 04:44:41.230467   73561 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:44:41.269559   73561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:44:41.316228   73561 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:44:41.316258   73561 cache_images.go:84] Images are preloaded, skipping loading
	I1004 04:44:41.316268   73561 kubeadm.go:934] updating node { 192.168.50.148 8443 v1.31.1 crio true true} ...
	I1004 04:44:41.316408   73561 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-204413 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:auto-204413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:44:41.316513   73561 ssh_runner.go:195] Run: crio config
	I1004 04:44:41.368417   73561 cni.go:84] Creating CNI manager for ""
	I1004 04:44:41.368450   73561 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:44:41.368475   73561 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:44:41.368508   73561 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.148 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-204413 NodeName:auto-204413 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:44:41.368648   73561 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.148
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-204413"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:44:41.368711   73561 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:44:41.380837   73561 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:44:41.380910   73561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:44:41.390968   73561 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1004 04:44:41.409981   73561 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:44:41.427478   73561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I1004 04:44:41.444823   73561 ssh_runner.go:195] Run: grep 192.168.50.148	control-plane.minikube.internal$ /etc/hosts
	I1004 04:44:41.448857   73561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:44:41.461506   73561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:44:41.595394   73561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:44:41.617327   73561 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413 for IP: 192.168.50.148
	I1004 04:44:41.617362   73561 certs.go:194] generating shared ca certs ...
	I1004 04:44:41.617386   73561 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:44:41.617589   73561 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:44:41.617644   73561 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:44:41.617657   73561 certs.go:256] generating profile certs ...
	I1004 04:44:41.617739   73561 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/client.key
	I1004 04:44:41.617776   73561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/client.crt with IP's: []
	I1004 04:44:41.741749   73561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/client.crt ...
	I1004 04:44:41.741778   73561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/client.crt: {Name:mk6b7a932cc05be8b79db7fdbcc5a7a15c436357 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:44:41.741966   73561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/client.key ...
	I1004 04:44:41.741984   73561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/client.key: {Name:mk853770d7676e687f806be803ccbdab3d0040c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:44:41.742097   73561 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/apiserver.key.a9605231
	I1004 04:44:41.742119   73561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/apiserver.crt.a9605231 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.148]
	I1004 04:44:42.241546   73561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/apiserver.crt.a9605231 ...
	I1004 04:44:42.241579   73561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/apiserver.crt.a9605231: {Name:mk837286f45fa5481f170a14c473c17ae9083770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:44:42.241774   73561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/apiserver.key.a9605231 ...
	I1004 04:44:42.241793   73561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/apiserver.key.a9605231: {Name:mkafc59d259594579f4317fb5113408b986ca664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:44:42.241895   73561 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/apiserver.crt.a9605231 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/apiserver.crt
	I1004 04:44:42.241982   73561 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/apiserver.key.a9605231 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/apiserver.key
	I1004 04:44:42.242036   73561 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/proxy-client.key
	I1004 04:44:42.242050   73561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/proxy-client.crt with IP's: []
	I1004 04:44:42.420678   73561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/proxy-client.crt ...
	I1004 04:44:42.420714   73561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/proxy-client.crt: {Name:mk88247f4d16bbd50b4442465f09262da43daf80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:44:42.420907   73561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/proxy-client.key ...
	I1004 04:44:42.420922   73561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/proxy-client.key: {Name:mk19ef8d3ad41489a1205cba1883208ee7a90979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
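	The crypto.go steps above mint the profile certificates: a client cert for "minikube-user", an apiserver cert whose SANs cover 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.50.148, and an aggregator proxy-client cert. A compressed sketch of issuing such a SAN-bearing certificate from a CA with the Go standard library; names, serial numbers and the elided error handling are illustrative, not minikube's crypto.go:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Errors elided for brevity; a real implementation would check each one.
		// In minikube the CA would be loaded from ca.crt/ca.key; here we create one.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf cert with the IP SANs listed in the log above.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.148"),
			},
		}
		der, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}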
	I1004 04:44:42.421119   73561 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:44:42.421167   73561 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:44:42.421182   73561 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:44:42.421216   73561 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:44:42.421251   73561 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:44:42.421281   73561 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:44:42.421335   73561 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:44:42.421896   73561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:44:42.471000   73561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:44:42.509031   73561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:44:42.538386   73561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:44:42.564560   73561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1004 04:44:42.665161   73561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 04:44:42.692806   73561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:44:42.718600   73561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 04:44:42.744331   73561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:44:42.771508   73561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:44:42.799418   73561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:44:42.826125   73561 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:44:42.844535   73561 ssh_runner.go:195] Run: openssl version
	I1004 04:44:42.851250   73561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:44:42.863052   73561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:44:42.868689   73561 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:44:42.868743   73561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:44:42.875250   73561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:44:42.887252   73561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:44:42.899017   73561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:44:42.904114   73561 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:44:42.904185   73561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:44:42.910174   73561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:44:42.925624   73561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:44:42.938621   73561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:44:42.944408   73561 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:44:42.944472   73561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:44:42.950567   73561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:44:42.962192   73561 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:44:42.966947   73561 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 04:44:42.967023   73561 kubeadm.go:392] StartCluster: {Name:auto-204413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-204413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:44:42.967137   73561 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:44:42.967193   73561 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:44:43.008927   73561 cri.go:89] found id: ""
	I1004 04:44:43.009006   73561 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:44:43.019655   73561 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:44:43.030103   73561 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:44:43.040455   73561 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:44:43.040479   73561 kubeadm.go:157] found existing configuration files:
	
	I1004 04:44:43.040521   73561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:44:43.051026   73561 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:44:43.051093   73561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:44:43.061431   73561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:44:43.071341   73561 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:44:43.071481   73561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:44:43.082751   73561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:44:43.092851   73561 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:44:43.092922   73561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:44:43.105969   73561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:44:43.115755   73561 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:44:43.115846   73561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:44:43.127698   73561 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:44:43.202350   73561 kubeadm.go:310] W1004 04:44:43.187124     848 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 04:44:43.203210   73561 kubeadm.go:310] W1004 04:44:43.188287     848 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 04:44:43.327750   73561 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:44:39.018688   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:44:39.019232   73894 main.go:141] libmachine: (kindnet-204413) DBG | unable to find current IP address of domain kindnet-204413 in network mk-kindnet-204413
	I1004 04:44:39.019260   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:39.019186   74017 retry.go:31] will retry after 747.456096ms: waiting for machine to come up
	I1004 04:44:39.767984   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:44:39.768497   73894 main.go:141] libmachine: (kindnet-204413) DBG | unable to find current IP address of domain kindnet-204413 in network mk-kindnet-204413
	I1004 04:44:39.768537   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:39.768436   74017 retry.go:31] will retry after 576.800787ms: waiting for machine to come up
	I1004 04:44:40.347525   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:44:40.347986   73894 main.go:141] libmachine: (kindnet-204413) DBG | unable to find current IP address of domain kindnet-204413 in network mk-kindnet-204413
	I1004 04:44:40.348007   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:40.347942   74017 retry.go:31] will retry after 937.929894ms: waiting for machine to come up
	I1004 04:44:41.287157   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:44:41.287654   73894 main.go:141] libmachine: (kindnet-204413) DBG | unable to find current IP address of domain kindnet-204413 in network mk-kindnet-204413
	I1004 04:44:41.287682   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:41.287618   74017 retry.go:31] will retry after 1.338303334s: waiting for machine to come up
	I1004 04:44:42.628192   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:44:42.628734   73894 main.go:141] libmachine: (kindnet-204413) DBG | unable to find current IP address of domain kindnet-204413 in network mk-kindnet-204413
	I1004 04:44:42.628760   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:42.628694   74017 retry.go:31] will retry after 1.672107958s: waiting for machine to come up
	I1004 04:44:44.302087   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:44:44.302609   73894 main.go:141] libmachine: (kindnet-204413) DBG | unable to find current IP address of domain kindnet-204413 in network mk-kindnet-204413
	I1004 04:44:44.302639   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:44.302560   74017 retry.go:31] will retry after 1.752954136s: waiting for machine to come up
	I1004 04:44:46.056791   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:44:46.057396   73894 main.go:141] libmachine: (kindnet-204413) DBG | unable to find current IP address of domain kindnet-204413 in network mk-kindnet-204413
	I1004 04:44:46.057424   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:46.057355   74017 retry.go:31] will retry after 2.816498609s: waiting for machine to come up
	I1004 04:44:48.876832   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:44:48.877357   73894 main.go:141] libmachine: (kindnet-204413) DBG | unable to find current IP address of domain kindnet-204413 in network mk-kindnet-204413
	I1004 04:44:48.877389   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:48.877326   74017 retry.go:31] will retry after 3.032534814s: waiting for machine to come up
	I1004 04:44:53.912483   73561 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 04:44:53.912558   73561 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:44:53.912691   73561 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:44:53.912851   73561 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:44:53.912959   73561 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 04:44:53.913070   73561 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:44:53.914863   73561 out.go:235]   - Generating certificates and keys ...
	I1004 04:44:53.914949   73561 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:44:53.915035   73561 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:44:53.915137   73561 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 04:44:53.915238   73561 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1004 04:44:53.915335   73561 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1004 04:44:53.915408   73561 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1004 04:44:53.915496   73561 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1004 04:44:53.915672   73561 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-204413 localhost] and IPs [192.168.50.148 127.0.0.1 ::1]
	I1004 04:44:53.915755   73561 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1004 04:44:53.915922   73561 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-204413 localhost] and IPs [192.168.50.148 127.0.0.1 ::1]
	I1004 04:44:53.916039   73561 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 04:44:53.916133   73561 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 04:44:53.916175   73561 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1004 04:44:53.916254   73561 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:44:53.916332   73561 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:44:53.916413   73561 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 04:44:53.916489   73561 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:44:53.916606   73561 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:44:53.916701   73561 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:44:53.916820   73561 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:44:53.916895   73561 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:44:53.918340   73561 out.go:235]   - Booting up control plane ...
	I1004 04:44:53.918450   73561 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:44:53.918539   73561 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:44:53.918631   73561 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:44:53.918759   73561 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:44:53.918881   73561 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:44:53.918944   73561 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:44:53.919159   73561 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 04:44:53.919306   73561 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 04:44:53.919398   73561 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001329493s
	I1004 04:44:53.919506   73561 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 04:44:53.919593   73561 kubeadm.go:310] [api-check] The API server is healthy after 5.501742357s
	I1004 04:44:53.919733   73561 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 04:44:53.919927   73561 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 04:44:53.920023   73561 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 04:44:53.920227   73561 kubeadm.go:310] [mark-control-plane] Marking the node auto-204413 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 04:44:53.920306   73561 kubeadm.go:310] [bootstrap-token] Using token: 8kt34r.a14g4cywg9dqrygr
	I1004 04:44:53.921803   73561 out.go:235]   - Configuring RBAC rules ...
	I1004 04:44:53.921928   73561 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 04:44:53.922032   73561 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 04:44:53.922228   73561 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 04:44:53.922380   73561 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 04:44:53.922527   73561 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 04:44:53.922660   73561 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 04:44:53.922810   73561 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 04:44:53.922873   73561 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 04:44:53.922936   73561 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 04:44:53.922948   73561 kubeadm.go:310] 
	I1004 04:44:53.923025   73561 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 04:44:53.923035   73561 kubeadm.go:310] 
	I1004 04:44:53.923146   73561 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 04:44:53.923156   73561 kubeadm.go:310] 
	I1004 04:44:53.923188   73561 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 04:44:53.923278   73561 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 04:44:53.923350   73561 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 04:44:53.923363   73561 kubeadm.go:310] 
	I1004 04:44:53.923439   73561 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 04:44:53.923451   73561 kubeadm.go:310] 
	I1004 04:44:53.923513   73561 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 04:44:53.923522   73561 kubeadm.go:310] 
	I1004 04:44:53.923591   73561 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 04:44:53.923713   73561 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 04:44:53.923815   73561 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 04:44:53.923829   73561 kubeadm.go:310] 
	I1004 04:44:53.923941   73561 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 04:44:53.924052   73561 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 04:44:53.924061   73561 kubeadm.go:310] 
	I1004 04:44:53.924170   73561 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8kt34r.a14g4cywg9dqrygr \
	I1004 04:44:53.924315   73561 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 \
	I1004 04:44:53.924350   73561 kubeadm.go:310] 	--control-plane 
	I1004 04:44:53.924358   73561 kubeadm.go:310] 
	I1004 04:44:53.924460   73561 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 04:44:53.924475   73561 kubeadm.go:310] 
	I1004 04:44:53.924602   73561 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8kt34r.a14g4cywg9dqrygr \
	I1004 04:44:53.924771   73561 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 
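(Editor's note on the join commands above: the --discovery-token-ca-cert-hash value is not arbitrary; kubeadm derives it as the SHA-256 of the cluster CA certificate's Subject Public Key Info. A minimal Go sketch of that computation follows; the /etc/kubernetes/pki/ca.crt path is the standard kubeadm location and is assumed here rather than taken from this log.)

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read the cluster CA certificate from the standard kubeadm location (assumed path).
	caPEM, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(caPEM)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is the SHA-256 of the CA cert's Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
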
	I1004 04:44:53.924795   73561 cni.go:84] Creating CNI manager for ""
	I1004 04:44:53.924809   73561 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:44:53.926680   73561 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:44:51.912143   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:44:51.912653   73894 main.go:141] libmachine: (kindnet-204413) DBG | unable to find current IP address of domain kindnet-204413 in network mk-kindnet-204413
	I1004 04:44:51.912683   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:51.912607   74017 retry.go:31] will retry after 3.442839822s: waiting for machine to come up
	I1004 04:44:53.928130   73561 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:44:53.946721   73561 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:44:53.974409   73561 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:44:53.974487   73561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:44:53.974511   73561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-204413 minikube.k8s.io/updated_at=2024_10_04T04_44_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=auto-204413 minikube.k8s.io/primary=true
	I1004 04:44:54.032244   73561 ops.go:34] apiserver oom_adj: -16
	I1004 04:44:54.152902   73561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:44:54.653596   73561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:44:55.153026   73561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:44:55.653200   73561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:44:56.153779   73561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:44:56.653033   73561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:44:57.153211   73561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:44:57.654000   73561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:44:58.153061   73561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:44:58.285977   73561 kubeadm.go:1113] duration metric: took 4.311552608s to wait for elevateKubeSystemPrivileges
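(Editor's note: the burst of identical "kubectl get sa default" runs at roughly half-second intervals above is a poll loop — minikube waits for the "default" ServiceAccount to exist before applying the elevated kube-system privileges. A minimal sketch of that pattern, where the timeout and poll interval are illustrative assumptions rather than minikube's exact values:)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		kubectl    = "/var/lib/minikube/binaries/v1.31.1/kubectl" // path as seen in the log above
		kubeconfig = "/var/lib/minikube/kubeconfig"
	)
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		// Succeeds only once the controller manager has created the "default" ServiceAccount.
		err := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig).Run()
		if err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond) // illustrative poll interval
	}
	fmt.Println("timed out waiting for the default service account")
}
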
	I1004 04:44:58.286030   73561 kubeadm.go:394] duration metric: took 15.319012221s to StartCluster
	I1004 04:44:58.286053   73561 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:44:58.286144   73561 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:44:58.288637   73561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:44:58.288963   73561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 04:44:58.288960   73561 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:44:58.288988   73561 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:44:58.289069   73561 addons.go:69] Setting storage-provisioner=true in profile "auto-204413"
	I1004 04:44:58.289187   73561 config.go:182] Loaded profile config "auto-204413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:44:58.289190   73561 addons.go:234] Setting addon storage-provisioner=true in "auto-204413"
	I1004 04:44:58.289275   73561 host.go:66] Checking if "auto-204413" exists ...
	I1004 04:44:58.289076   73561 addons.go:69] Setting default-storageclass=true in profile "auto-204413"
	I1004 04:44:58.289335   73561 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-204413"
	I1004 04:44:58.289729   73561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:44:58.289753   73561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:44:58.289772   73561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:44:58.289773   73561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:44:58.290714   73561 out.go:177] * Verifying Kubernetes components...
	I1004 04:44:58.292117   73561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:44:58.305364   73561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45947
	I1004 04:44:58.305827   73561 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:44:58.306391   73561 main.go:141] libmachine: Using API Version  1
	I1004 04:44:58.306415   73561 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:44:58.306783   73561 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:44:58.306997   73561 main.go:141] libmachine: (auto-204413) Calling .GetState
	I1004 04:44:58.310031   73561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38795
	I1004 04:44:58.310497   73561 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:44:58.311032   73561 main.go:141] libmachine: Using API Version  1
	I1004 04:44:58.311052   73561 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:44:58.311341   73561 addons.go:234] Setting addon default-storageclass=true in "auto-204413"
	I1004 04:44:58.311382   73561 host.go:66] Checking if "auto-204413" exists ...
	I1004 04:44:58.311447   73561 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:44:58.311760   73561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:44:58.311822   73561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:44:58.311963   73561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:44:58.311994   73561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:44:58.328541   73561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45289
	I1004 04:44:58.328574   73561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45877
	I1004 04:44:58.329052   73561 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:44:58.329134   73561 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:44:58.329514   73561 main.go:141] libmachine: Using API Version  1
	I1004 04:44:58.329530   73561 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:44:58.329650   73561 main.go:141] libmachine: Using API Version  1
	I1004 04:44:58.329679   73561 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:44:58.329880   73561 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:44:58.330025   73561 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:44:58.330448   73561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:44:58.330504   73561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:44:58.330793   73561 main.go:141] libmachine: (auto-204413) Calling .GetState
	I1004 04:44:58.332812   73561 main.go:141] libmachine: (auto-204413) Calling .DriverName
	I1004 04:44:58.334952   73561 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:44:58.336481   73561 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:44:58.336499   73561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:44:58.336515   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHHostname
	I1004 04:44:58.339750   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:58.340138   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:58.340162   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:58.340447   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHPort
	I1004 04:44:58.340626   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:58.340771   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHUsername
	I1004 04:44:58.340915   73561 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/auto-204413/id_rsa Username:docker}
	I1004 04:44:58.349374   73561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39085
	I1004 04:44:58.349882   73561 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:44:58.350393   73561 main.go:141] libmachine: Using API Version  1
	I1004 04:44:58.350412   73561 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:44:58.350702   73561 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:44:58.350902   73561 main.go:141] libmachine: (auto-204413) Calling .GetState
	I1004 04:44:58.352710   73561 main.go:141] libmachine: (auto-204413) Calling .DriverName
	I1004 04:44:58.352919   73561 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:44:58.352935   73561 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:44:58.352947   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHHostname
	I1004 04:44:58.355617   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:58.356052   73561 main.go:141] libmachine: (auto-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:c2:f7", ip: ""} in network mk-auto-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:44:23 +0000 UTC Type:0 Mac:52:54:00:bf:c2:f7 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:auto-204413 Clientid:01:52:54:00:bf:c2:f7}
	I1004 04:44:58.356073   73561 main.go:141] libmachine: (auto-204413) DBG | domain auto-204413 has defined IP address 192.168.50.148 and MAC address 52:54:00:bf:c2:f7 in network mk-auto-204413
	I1004 04:44:58.356327   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHPort
	I1004 04:44:58.356520   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHKeyPath
	I1004 04:44:58.356671   73561 main.go:141] libmachine: (auto-204413) Calling .GetSSHUsername
	I1004 04:44:58.356898   73561 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/auto-204413/id_rsa Username:docker}
	I1004 04:44:58.613436   73561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:44:58.613671   73561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 04:44:55.357725   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:44:55.358158   73894 main.go:141] libmachine: (kindnet-204413) DBG | unable to find current IP address of domain kindnet-204413 in network mk-kindnet-204413
	I1004 04:44:55.358184   73894 main.go:141] libmachine: (kindnet-204413) DBG | I1004 04:44:55.358128   74017 retry.go:31] will retry after 5.470730477s: waiting for machine to come up
	I1004 04:44:58.695432   73561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:44:58.872640   73561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:44:59.244213   73561 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1004 04:44:59.244365   73561 main.go:141] libmachine: Making call to close driver server
	I1004 04:44:59.244421   73561 main.go:141] libmachine: (auto-204413) Calling .Close
	I1004 04:44:59.244860   73561 main.go:141] libmachine: (auto-204413) DBG | Closing plugin on server side
	I1004 04:44:59.244974   73561 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:44:59.244998   73561 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:44:59.245011   73561 main.go:141] libmachine: Making call to close driver server
	I1004 04:44:59.245020   73561 main.go:141] libmachine: (auto-204413) Calling .Close
	I1004 04:44:59.245350   73561 main.go:141] libmachine: (auto-204413) DBG | Closing plugin on server side
	I1004 04:44:59.245361   73561 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:44:59.245370   73561 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:44:59.246207   73561 node_ready.go:35] waiting up to 15m0s for node "auto-204413" to be "Ready" ...
	I1004 04:44:59.270232   73561 node_ready.go:49] node "auto-204413" has status "Ready":"True"
	I1004 04:44:59.270282   73561 node_ready.go:38] duration metric: took 24.036552ms for node "auto-204413" to be "Ready" ...
	I1004 04:44:59.270293   73561 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:44:59.282053   73561 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-62jjr" in "kube-system" namespace to be "Ready" ...
	I1004 04:44:59.290707   73561 main.go:141] libmachine: Making call to close driver server
	I1004 04:44:59.290741   73561 main.go:141] libmachine: (auto-204413) Calling .Close
	I1004 04:44:59.291105   73561 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:44:59.291112   73561 main.go:141] libmachine: (auto-204413) DBG | Closing plugin on server side
	I1004 04:44:59.291122   73561 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:44:59.756396   73561 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-204413" context rescaled to 1 replicas
	I1004 04:44:59.782057   73561 main.go:141] libmachine: Making call to close driver server
	I1004 04:44:59.782085   73561 main.go:141] libmachine: (auto-204413) Calling .Close
	I1004 04:44:59.782387   73561 main.go:141] libmachine: (auto-204413) DBG | Closing plugin on server side
	I1004 04:44:59.782442   73561 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:44:59.782453   73561 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:44:59.782465   73561 main.go:141] libmachine: Making call to close driver server
	I1004 04:44:59.782476   73561 main.go:141] libmachine: (auto-204413) Calling .Close
	I1004 04:44:59.782735   73561 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:44:59.782757   73561 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:44:59.784774   73561 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1004 04:44:59.786147   73561 addons.go:510] duration metric: took 1.497156256s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1004 04:45:01.293514   73561 pod_ready.go:103] pod "coredns-7c65d6cfc9-62jjr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:45:00.830271   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:00.830749   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has current primary IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:00.830767   73894 main.go:141] libmachine: (kindnet-204413) Found IP for machine: 192.168.72.39
	I1004 04:45:00.830776   73894 main.go:141] libmachine: (kindnet-204413) Reserving static IP address...
	I1004 04:45:00.831142   73894 main.go:141] libmachine: (kindnet-204413) DBG | unable to find host DHCP lease matching {name: "kindnet-204413", mac: "52:54:00:5c:9f:c2", ip: "192.168.72.39"} in network mk-kindnet-204413
	I1004 04:45:00.907223   73894 main.go:141] libmachine: (kindnet-204413) DBG | Getting to WaitForSSH function...
	I1004 04:45:00.907258   73894 main.go:141] libmachine: (kindnet-204413) Reserved static IP address: 192.168.72.39
	I1004 04:45:00.907272   73894 main.go:141] libmachine: (kindnet-204413) Waiting for SSH to be available...
	I1004 04:45:00.910146   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:00.910694   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:00.910720   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:00.910867   73894 main.go:141] libmachine: (kindnet-204413) DBG | Using SSH client type: external
	I1004 04:45:00.910889   73894 main.go:141] libmachine: (kindnet-204413) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/kindnet-204413/id_rsa (-rw-------)
	I1004 04:45:00.910946   73894 main.go:141] libmachine: (kindnet-204413) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/kindnet-204413/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:45:00.910972   73894 main.go:141] libmachine: (kindnet-204413) DBG | About to run SSH command:
	I1004 04:45:00.910990   73894 main.go:141] libmachine: (kindnet-204413) DBG | exit 0
	I1004 04:45:01.036076   73894 main.go:141] libmachine: (kindnet-204413) DBG | SSH cmd err, output: <nil>: 
	I1004 04:45:01.036328   73894 main.go:141] libmachine: (kindnet-204413) KVM machine creation complete!
	I1004 04:45:01.036633   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetConfigRaw
	I1004 04:45:01.037196   73894 main.go:141] libmachine: (kindnet-204413) Calling .DriverName
	I1004 04:45:01.037403   73894 main.go:141] libmachine: (kindnet-204413) Calling .DriverName
	I1004 04:45:01.037566   73894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 04:45:01.037582   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetState
	I1004 04:45:01.039128   73894 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 04:45:01.039141   73894 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 04:45:01.039148   73894 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 04:45:01.039154   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHHostname
	I1004 04:45:01.041472   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:01.041794   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:kindnet-204413 Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:01.041823   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:01.041960   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHPort
	I1004 04:45:01.042141   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHKeyPath
	I1004 04:45:01.042297   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHKeyPath
	I1004 04:45:01.042445   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHUsername
	I1004 04:45:01.042660   73894 main.go:141] libmachine: Using SSH client type: native
	I1004 04:45:01.042890   73894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.39 22 <nil> <nil>}
	I1004 04:45:01.042903   73894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 04:45:01.151322   73894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:45:01.151351   73894 main.go:141] libmachine: Detecting the provisioner...
	I1004 04:45:01.151362   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHHostname
	I1004 04:45:01.154177   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:01.154583   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:kindnet-204413 Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:01.154608   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:01.154781   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHPort
	I1004 04:45:01.154959   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHKeyPath
	I1004 04:45:01.155120   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHKeyPath
	I1004 04:45:01.155268   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHUsername
	I1004 04:45:01.155425   73894 main.go:141] libmachine: Using SSH client type: native
	I1004 04:45:01.155632   73894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.39 22 <nil> <nil>}
	I1004 04:45:01.155644   73894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 04:45:01.267712   73894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 04:45:01.267830   73894 main.go:141] libmachine: found compatible host: buildroot
	I1004 04:45:01.267843   73894 main.go:141] libmachine: Provisioning with buildroot...
	I1004 04:45:01.267849   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetMachineName
	I1004 04:45:01.268118   73894 buildroot.go:166] provisioning hostname "kindnet-204413"
	I1004 04:45:01.268143   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetMachineName
	I1004 04:45:01.268323   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHHostname
	I1004 04:45:01.271329   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:01.271809   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:kindnet-204413 Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:01.271841   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:01.272011   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHPort
	I1004 04:45:01.272183   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHKeyPath
	I1004 04:45:01.272342   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHKeyPath
	I1004 04:45:01.272471   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHUsername
	I1004 04:45:01.272635   73894 main.go:141] libmachine: Using SSH client type: native
	I1004 04:45:01.272873   73894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.39 22 <nil> <nil>}
	I1004 04:45:01.272892   73894 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-204413 && echo "kindnet-204413" | sudo tee /etc/hostname
	I1004 04:45:01.394616   73894 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-204413
	
	I1004 04:45:01.394674   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHHostname
	I1004 04:45:01.397959   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:01.398354   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:kindnet-204413 Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:01.398375   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:01.398548   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHPort
	I1004 04:45:01.398770   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHKeyPath
	I1004 04:45:01.398970   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHKeyPath
	I1004 04:45:01.399099   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHUsername
	I1004 04:45:01.399295   73894 main.go:141] libmachine: Using SSH client type: native
	I1004 04:45:01.399502   73894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.39 22 <nil> <nil>}
	I1004 04:45:01.399546   73894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-204413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-204413/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-204413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:45:01.517793   73894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:45:01.517833   73894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:45:01.517858   73894 buildroot.go:174] setting up certificates
	I1004 04:45:01.517870   73894 provision.go:84] configureAuth start
	I1004 04:45:01.517882   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetMachineName
	I1004 04:45:01.518189   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetIP
	I1004 04:45:01.520909   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:01.521425   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:kindnet-204413 Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:01.521452   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:01.521588   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHHostname
	I1004 04:45:01.523840   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:01.524269   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:kindnet-204413 Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:01.524311   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:01.524459   73894 provision.go:143] copyHostCerts
	I1004 04:45:01.524527   73894 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:45:01.524536   73894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:45:01.524599   73894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:45:01.524701   73894 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:45:01.524715   73894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:45:01.524745   73894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:45:01.524852   73894 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:45:01.524865   73894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:45:01.524901   73894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:45:01.524962   73894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.kindnet-204413 san=[127.0.0.1 192.168.72.39 kindnet-204413 localhost minikube]
	I1004 04:45:01.737756   73894 provision.go:177] copyRemoteCerts
	I1004 04:45:01.737835   73894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:45:01.737863   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHHostname
	I1004 04:45:01.741657   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:01.742130   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:kindnet-204413 Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:01.742162   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:01.742415   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHPort
	I1004 04:45:01.742662   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHKeyPath
	I1004 04:45:01.742833   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHUsername
	I1004 04:45:01.743010   73894 sshutil.go:53] new ssh client: &{IP:192.168.72.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/kindnet-204413/id_rsa Username:docker}
	I1004 04:45:01.831413   73894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:45:01.857144   73894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1004 04:45:01.883324   73894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:45:01.911117   73894 provision.go:87] duration metric: took 393.233509ms to configureAuth
	I1004 04:45:01.911147   73894 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:45:01.911302   73894 config.go:182] Loaded profile config "kindnet-204413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:45:01.911368   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHHostname
	I1004 04:45:01.914675   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:01.914982   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:kindnet-204413 Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:01.915011   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:01.915241   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHPort
	I1004 04:45:01.915472   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHKeyPath
	I1004 04:45:01.915643   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHKeyPath
	I1004 04:45:01.915810   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHUsername
	I1004 04:45:01.915988   73894 main.go:141] libmachine: Using SSH client type: native
	I1004 04:45:01.916181   73894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.39 22 <nil> <nil>}
	I1004 04:45:01.916195   73894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:45:02.145520   73894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:45:02.145549   73894 main.go:141] libmachine: Checking connection to Docker...
	I1004 04:45:02.145557   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetURL
	I1004 04:45:02.147004   73894 main.go:141] libmachine: (kindnet-204413) DBG | Using libvirt version 6000000
	I1004 04:45:02.149217   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:02.149587   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:kindnet-204413 Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:02.149622   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:02.149847   73894 main.go:141] libmachine: Docker is up and running!
	I1004 04:45:02.149860   73894 main.go:141] libmachine: Reticulating splines...
	I1004 04:45:02.149867   73894 client.go:171] duration metric: took 26.413476978s to LocalClient.Create
	I1004 04:45:02.149890   73894 start.go:167] duration metric: took 26.413542317s to libmachine.API.Create "kindnet-204413"
	I1004 04:45:02.149903   73894 start.go:293] postStartSetup for "kindnet-204413" (driver="kvm2")
	I1004 04:45:02.149917   73894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:45:02.149940   73894 main.go:141] libmachine: (kindnet-204413) Calling .DriverName
	I1004 04:45:02.150168   73894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:45:02.150192   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHHostname
	I1004 04:45:02.152356   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:02.152684   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:kindnet-204413 Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:02.152714   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:02.152966   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHPort
	I1004 04:45:02.153155   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHKeyPath
	I1004 04:45:02.153306   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHUsername
	I1004 04:45:02.153464   73894 sshutil.go:53] new ssh client: &{IP:192.168.72.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/kindnet-204413/id_rsa Username:docker}
	I1004 04:45:02.240171   73894 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:45:02.245817   73894 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:45:02.245855   73894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:45:02.245947   73894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:45:02.246053   73894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:45:02.246143   73894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:45:02.257901   73894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:45:02.285828   73894 start.go:296] duration metric: took 135.911025ms for postStartSetup
	I1004 04:45:02.285889   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetConfigRaw
	I1004 04:45:02.286572   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetIP
	I1004 04:45:02.289924   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:02.290410   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:kindnet-204413 Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:02.290447   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:02.290731   73894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/kindnet-204413/config.json ...
	I1004 04:45:02.290925   73894 start.go:128] duration metric: took 26.577615992s to createHost
	I1004 04:45:02.290943   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHHostname
	I1004 04:45:02.293753   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:02.294079   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:kindnet-204413 Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:02.294104   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:02.294222   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHPort
	I1004 04:45:02.294383   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHKeyPath
	I1004 04:45:02.294533   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHKeyPath
	I1004 04:45:02.294672   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHUsername
	I1004 04:45:02.294799   73894 main.go:141] libmachine: Using SSH client type: native
	I1004 04:45:02.294942   73894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.39 22 <nil> <nil>}
	I1004 04:45:02.294951   73894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:45:02.404764   73894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728017102.390609629
	
	I1004 04:45:02.404795   73894 fix.go:216] guest clock: 1728017102.390609629
	I1004 04:45:02.404805   73894 fix.go:229] Guest: 2024-10-04 04:45:02.390609629 +0000 UTC Remote: 2024-10-04 04:45:02.29093422 +0000 UTC m=+48.358511571 (delta=99.675409ms)
	I1004 04:45:02.404871   73894 fix.go:200] guest clock delta is within tolerance: 99.675409ms
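Note (illustrative, not minikube's actual implementation): the two fix.go lines above compare the guest's clock against the host's and accept the skew because it is small. A minimal Go sketch of that check, using the exact timestamps from the log, is shown below; the 2s tolerance constant is an assumption for illustration only, since the log records only that a 99.675409ms delta was accepted.

	// clockdelta_sketch.go - reconstructs the guest-vs-host clock comparison
	// visible in the log above. Timestamps copied from the log; the tolerance
	// value is assumed, not taken from minikube.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Date(2024, 10, 4, 4, 45, 2, 390609629, time.UTC) // guest clock from the log
		host := time.Date(2024, 10, 4, 4, 45, 2, 290934220, time.UTC)  // "Remote" timestamp from the log

		delta := guest.Sub(host) // 99.675409ms, matching the log line
		if delta < 0 {
			delta = -delta
		}

		const tolerance = 2 * time.Second // assumed tolerance for this sketch
		if delta <= tolerance {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance; a resync would be needed\n", delta)
		}
	}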
	I1004 04:45:02.404883   73894 start.go:83] releasing machines lock for "kindnet-204413", held for 26.691780102s
	I1004 04:45:02.404914   73894 main.go:141] libmachine: (kindnet-204413) Calling .DriverName
	I1004 04:45:02.405177   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetIP
	I1004 04:45:02.407893   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:02.408312   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:kindnet-204413 Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:02.408338   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:02.408672   73894 main.go:141] libmachine: (kindnet-204413) Calling .DriverName
	I1004 04:45:02.409201   73894 main.go:141] libmachine: (kindnet-204413) Calling .DriverName
	I1004 04:45:02.409411   73894 main.go:141] libmachine: (kindnet-204413) Calling .DriverName
	I1004 04:45:02.409503   73894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:45:02.409556   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHHostname
	I1004 04:45:02.409697   73894 ssh_runner.go:195] Run: cat /version.json
	I1004 04:45:02.409722   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHHostname
	I1004 04:45:02.412802   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:02.413132   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:02.413274   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:kindnet-204413 Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:02.413309   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:02.413658   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHPort
	I1004 04:45:02.413722   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:kindnet-204413 Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:02.413743   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:02.413845   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHKeyPath
	I1004 04:45:02.413959   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHPort
	I1004 04:45:02.413960   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHUsername
	I1004 04:45:02.414172   73894 sshutil.go:53] new ssh client: &{IP:192.168.72.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/kindnet-204413/id_rsa Username:docker}
	I1004 04:45:02.414190   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHKeyPath
	I1004 04:45:02.414342   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetSSHUsername
	I1004 04:45:02.414467   73894 sshutil.go:53] new ssh client: &{IP:192.168.72.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/kindnet-204413/id_rsa Username:docker}
	I1004 04:45:02.513884   73894 ssh_runner.go:195] Run: systemctl --version
	I1004 04:45:02.520672   73894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:45:02.690915   73894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:45:02.696972   73894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:45:02.697027   73894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:45:02.713326   73894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:45:02.713350   73894 start.go:495] detecting cgroup driver to use...
	I1004 04:45:02.713433   73894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:45:02.732323   73894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:45:02.747358   73894 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:45:02.747431   73894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:45:02.761779   73894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:45:02.776377   73894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:45:02.902643   73894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:45:03.073081   73894 docker.go:233] disabling docker service ...
	I1004 04:45:03.073216   73894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:45:03.088544   73894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:45:03.101804   73894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:45:03.225196   73894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:45:03.358739   73894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:45:03.377756   73894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:45:03.397456   73894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:45:03.397526   73894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:45:03.408975   73894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:45:03.409050   73894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:45:03.419860   73894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:45:03.430900   73894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:45:03.442386   73894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:45:03.454429   73894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:45:03.465708   73894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:45:03.484122   73894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
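Note (reconstructed sketch, not a dump of the actual file): taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. The section headers and any other settings in the drop-in are assumptions based on CRI-O's usual layout; only the values themselves come from the commands shown in the log.

	# Sketch of the relevant keys after the substitutions above (other settings omitted)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]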
	I1004 04:45:03.495756   73894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:45:03.506138   73894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:45:03.506204   73894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:45:03.520188   73894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:45:03.530797   73894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:45:03.657187   73894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:45:03.758093   73894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:45:03.758196   73894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:45:03.763161   73894 start.go:563] Will wait 60s for crictl version
	I1004 04:45:03.763223   73894 ssh_runner.go:195] Run: which crictl
	I1004 04:45:03.767556   73894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:45:03.810762   73894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:45:03.810857   73894 ssh_runner.go:195] Run: crio --version
	I1004 04:45:03.841371   73894 ssh_runner.go:195] Run: crio --version
	I1004 04:45:03.874515   73894 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:45:03.875811   73894 main.go:141] libmachine: (kindnet-204413) Calling .GetIP
	I1004 04:45:03.878609   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:03.878998   73894 main.go:141] libmachine: (kindnet-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:9f:c2", ip: ""} in network mk-kindnet-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:44:51 +0000 UTC Type:0 Mac:52:54:00:5c:9f:c2 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:kindnet-204413 Clientid:01:52:54:00:5c:9f:c2}
	I1004 04:45:03.879023   73894 main.go:141] libmachine: (kindnet-204413) DBG | domain kindnet-204413 has defined IP address 192.168.72.39 and MAC address 52:54:00:5c:9f:c2 in network mk-kindnet-204413
	I1004 04:45:03.879282   73894 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1004 04:45:03.883900   73894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:45:03.897227   73894 kubeadm.go:883] updating cluster {Name:kindnet-204413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:kindnet-204413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:45:03.897360   73894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:45:03.897404   73894 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:45:03.931537   73894 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:45:03.931622   73894 ssh_runner.go:195] Run: which lz4
	I1004 04:45:03.936458   73894 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:45:03.940911   73894 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:45:03.940946   73894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
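Note (illustrative sketch only): the last few log lines follow a check-then-copy pattern: query the runtime for the expected image, and when it is missing, stat the guest-side tarball and, failing that, copy the ~388 MB preload archive over SSH. The Go sketch below mirrors that decision flow; the local os.Stat stands in for the remote "stat /preloaded.tar.lz4" and is an assumption for illustration, not minikube's code.

	// preload_sketch.go - check whether the preload tarball is already on the
	// guest before copying it, mirroring the flow in the log above.
	package main

	import (
		"fmt"
		"os"
	)

	// hasPreloadedTarball reports whether the tarball already exists at path.
	func hasPreloadedTarball(path string) bool {
		_, err := os.Stat(path)
		return err == nil
	}

	func main() {
		const target = "/preloaded.tar.lz4" // path taken from the log above
		if hasPreloadedTarball(target) {
			fmt.Println("preload tarball already present, skipping copy")
			return
		}
		fmt.Println("preload tarball missing, copying archive over SSH")
		// In the real flow an scp-style transfer follows (see ssh_runner.go:362 above).
	}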
	
	
	==> CRI-O <==
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.089864307Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017106089841882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=871d84ec-1131-4561-9494-c09338b3da2f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.090593633Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c36c0526-a8ae-4496-8418-93e98996b3f1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.090665380Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c36c0526-a8ae-4496-8418-93e98996b3f1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.090853545Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee2305c441f29525be535936a0d26d917c6641ba676c0cd946dd389e33592d0e,PodSandboxId:ce87104926ac6615ffe2a06ef2da4e5300660538068e1a9b4b4e2da9c6007bab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728016125307438513,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b4ef22-068c-4d14-840e-deab91c5ab94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbdcd3a324f421f3f0dc2aff76b89a9da3e57f5627e61474562ff8414be5510,PodSandboxId:b40d8c44f59da73246613d9666638c42077bc69f9b86d11b17059cc8b2acc9b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728016124144804273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5tbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87deb61f-2ce4-4d45-91da-c16557b5ef75,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188076ac7a7af46670dbbdee6a82210ef511cdb240986543a3a8c664b4b0cb3f,PodSandboxId:4c42d4b4b1430d0c5826a18a6325df53f7738d59c8e79b4fb9ef91294730f56a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728016124135602860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p52s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
9b3cd7f-f28e-4502-a55d-7792cfa5a6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4cec58f8215a7fb666645448240662fbda789f0558c0926b20f9f0958517b9,PodSandboxId:c8ba948590195942a8e3f251cdcfe6f39a0d657c4d719591b6df20174cddc5bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728016123867478706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9czbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedff5a2-62b6-49c3-8369-9182d1c5bf7a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f368c0bb224d66a098cacaa67038f90f3d29fb0beb9bef1fee3cc4bba10ab34,PodSandboxId:04daab29e1a1fcbb05c9905daafdb7e8a4fe30f5c7714977d33aebd4a2afed05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728016112715730070,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395dacd00dc811c334e4fda7898664c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bca5274feb5a67a3e722abb2ee79d56e93ff3ff1cf0350f2dc0eadf6af6764,PodSandboxId:2a81540c23b0366c2d0063654b570487e3d57ef145c5363cd2ea803ab87301b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728016112672617127,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c206ade11d659fd6eef7ef29aa408cde,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b379c78d8a9fc71afe4dbc57caf6f54b81d515dce69290f74a4aaf7a111fab0,PodSandboxId:fb51e9c9fda9bc66053c3f42630875ded0784fabd965b6cd4f255ecd2e6f59db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728016112660334881,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e126f795bcf640ac6233faca19ff5b5e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be902a556db8d8f7a0a1a76bc3064baa5240668a920e4640fefb2198eaafa08b,PodSandboxId:94aae5132834a78902d1424736bb9dd45722628c7813d3da821ab29d14247c97,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728016112627866859,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73de2741451a14dbc69d5790883240975641ebae82f1af6f9acbd6d71c52000e,PodSandboxId:25174628a7c5d2aaeae687c122ee504f625f3fe09c9af3c9057bb1728b7ec3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728015823040740414,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c36c0526-a8ae-4496-8418-93e98996b3f1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.144455715Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e210701-53bb-436f-bfba-514eccdb0b31 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.144583367Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e210701-53bb-436f-bfba-514eccdb0b31 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.146297910Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=022a94af-aac6-46d0-a32e-3e4487a71eb9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.146914893Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017106146877784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=022a94af-aac6-46d0-a32e-3e4487a71eb9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.147812980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ece7a474-d016-49fd-a952-4061121438fe name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.148145151Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ece7a474-d016-49fd-a952-4061121438fe name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.148511910Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee2305c441f29525be535936a0d26d917c6641ba676c0cd946dd389e33592d0e,PodSandboxId:ce87104926ac6615ffe2a06ef2da4e5300660538068e1a9b4b4e2da9c6007bab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728016125307438513,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b4ef22-068c-4d14-840e-deab91c5ab94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbdcd3a324f421f3f0dc2aff76b89a9da3e57f5627e61474562ff8414be5510,PodSandboxId:b40d8c44f59da73246613d9666638c42077bc69f9b86d11b17059cc8b2acc9b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728016124144804273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5tbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87deb61f-2ce4-4d45-91da-c16557b5ef75,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188076ac7a7af46670dbbdee6a82210ef511cdb240986543a3a8c664b4b0cb3f,PodSandboxId:4c42d4b4b1430d0c5826a18a6325df53f7738d59c8e79b4fb9ef91294730f56a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728016124135602860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p52s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
9b3cd7f-f28e-4502-a55d-7792cfa5a6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4cec58f8215a7fb666645448240662fbda789f0558c0926b20f9f0958517b9,PodSandboxId:c8ba948590195942a8e3f251cdcfe6f39a0d657c4d719591b6df20174cddc5bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728016123867478706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9czbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedff5a2-62b6-49c3-8369-9182d1c5bf7a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f368c0bb224d66a098cacaa67038f90f3d29fb0beb9bef1fee3cc4bba10ab34,PodSandboxId:04daab29e1a1fcbb05c9905daafdb7e8a4fe30f5c7714977d33aebd4a2afed05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728016112715730070,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395dacd00dc811c334e4fda7898664c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bca5274feb5a67a3e722abb2ee79d56e93ff3ff1cf0350f2dc0eadf6af6764,PodSandboxId:2a81540c23b0366c2d0063654b570487e3d57ef145c5363cd2ea803ab87301b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728016112672617127,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c206ade11d659fd6eef7ef29aa408cde,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b379c78d8a9fc71afe4dbc57caf6f54b81d515dce69290f74a4aaf7a111fab0,PodSandboxId:fb51e9c9fda9bc66053c3f42630875ded0784fabd965b6cd4f255ecd2e6f59db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728016112660334881,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e126f795bcf640ac6233faca19ff5b5e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be902a556db8d8f7a0a1a76bc3064baa5240668a920e4640fefb2198eaafa08b,PodSandboxId:94aae5132834a78902d1424736bb9dd45722628c7813d3da821ab29d14247c97,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728016112627866859,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73de2741451a14dbc69d5790883240975641ebae82f1af6f9acbd6d71c52000e,PodSandboxId:25174628a7c5d2aaeae687c122ee504f625f3fe09c9af3c9057bb1728b7ec3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728015823040740414,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ece7a474-d016-49fd-a952-4061121438fe name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.208617378Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6261274a-a741-416e-b243-4797ebe04881 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.208734388Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6261274a-a741-416e-b243-4797ebe04881 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.210383321Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afa567ab-d08a-43b9-a2bb-5e6e97b33c17 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.210983593Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017106210954294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afa567ab-d08a-43b9-a2bb-5e6e97b33c17 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.211723039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bdbbe098-3a93-4315-bd78-7326e5a7e626 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.211794205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bdbbe098-3a93-4315-bd78-7326e5a7e626 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.212009502Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee2305c441f29525be535936a0d26d917c6641ba676c0cd946dd389e33592d0e,PodSandboxId:ce87104926ac6615ffe2a06ef2da4e5300660538068e1a9b4b4e2da9c6007bab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728016125307438513,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b4ef22-068c-4d14-840e-deab91c5ab94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbdcd3a324f421f3f0dc2aff76b89a9da3e57f5627e61474562ff8414be5510,PodSandboxId:b40d8c44f59da73246613d9666638c42077bc69f9b86d11b17059cc8b2acc9b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728016124144804273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5tbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87deb61f-2ce4-4d45-91da-c16557b5ef75,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188076ac7a7af46670dbbdee6a82210ef511cdb240986543a3a8c664b4b0cb3f,PodSandboxId:4c42d4b4b1430d0c5826a18a6325df53f7738d59c8e79b4fb9ef91294730f56a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728016124135602860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p52s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
9b3cd7f-f28e-4502-a55d-7792cfa5a6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4cec58f8215a7fb666645448240662fbda789f0558c0926b20f9f0958517b9,PodSandboxId:c8ba948590195942a8e3f251cdcfe6f39a0d657c4d719591b6df20174cddc5bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728016123867478706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9czbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedff5a2-62b6-49c3-8369-9182d1c5bf7a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f368c0bb224d66a098cacaa67038f90f3d29fb0beb9bef1fee3cc4bba10ab34,PodSandboxId:04daab29e1a1fcbb05c9905daafdb7e8a4fe30f5c7714977d33aebd4a2afed05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728016112715730070,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395dacd00dc811c334e4fda7898664c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bca5274feb5a67a3e722abb2ee79d56e93ff3ff1cf0350f2dc0eadf6af6764,PodSandboxId:2a81540c23b0366c2d0063654b570487e3d57ef145c5363cd2ea803ab87301b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728016112672617127,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c206ade11d659fd6eef7ef29aa408cde,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b379c78d8a9fc71afe4dbc57caf6f54b81d515dce69290f74a4aaf7a111fab0,PodSandboxId:fb51e9c9fda9bc66053c3f42630875ded0784fabd965b6cd4f255ecd2e6f59db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728016112660334881,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e126f795bcf640ac6233faca19ff5b5e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be902a556db8d8f7a0a1a76bc3064baa5240668a920e4640fefb2198eaafa08b,PodSandboxId:94aae5132834a78902d1424736bb9dd45722628c7813d3da821ab29d14247c97,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728016112627866859,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73de2741451a14dbc69d5790883240975641ebae82f1af6f9acbd6d71c52000e,PodSandboxId:25174628a7c5d2aaeae687c122ee504f625f3fe09c9af3c9057bb1728b7ec3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728015823040740414,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bdbbe098-3a93-4315-bd78-7326e5a7e626 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.252554315Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b00cacc-ee92-48da-b1d7-5a453bc47b10 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.252645711Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b00cacc-ee92-48da-b1d7-5a453bc47b10 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.253934497Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6fb40257-f75a-49d7-ac19-cd43698e0a8f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.254632233Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017106254604761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fb40257-f75a-49d7-ac19-cd43698e0a8f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.255494388Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=434e274b-ecd2-4fcb-baef-f96cfe36fb72 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.255594436Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=434e274b-ecd2-4fcb-baef-f96cfe36fb72 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:45:06 embed-certs-934812 crio[706]: time="2024-10-04 04:45:06.255822725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee2305c441f29525be535936a0d26d917c6641ba676c0cd946dd389e33592d0e,PodSandboxId:ce87104926ac6615ffe2a06ef2da4e5300660538068e1a9b4b4e2da9c6007bab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728016125307438513,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b4ef22-068c-4d14-840e-deab91c5ab94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbdcd3a324f421f3f0dc2aff76b89a9da3e57f5627e61474562ff8414be5510,PodSandboxId:b40d8c44f59da73246613d9666638c42077bc69f9b86d11b17059cc8b2acc9b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728016124144804273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5tbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87deb61f-2ce4-4d45-91da-c16557b5ef75,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:188076ac7a7af46670dbbdee6a82210ef511cdb240986543a3a8c664b4b0cb3f,PodSandboxId:4c42d4b4b1430d0c5826a18a6325df53f7738d59c8e79b4fb9ef91294730f56a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728016124135602860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p52s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
9b3cd7f-f28e-4502-a55d-7792cfa5a6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4cec58f8215a7fb666645448240662fbda789f0558c0926b20f9f0958517b9,PodSandboxId:c8ba948590195942a8e3f251cdcfe6f39a0d657c4d719591b6df20174cddc5bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728016123867478706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9czbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dedff5a2-62b6-49c3-8369-9182d1c5bf7a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f368c0bb224d66a098cacaa67038f90f3d29fb0beb9bef1fee3cc4bba10ab34,PodSandboxId:04daab29e1a1fcbb05c9905daafdb7e8a4fe30f5c7714977d33aebd4a2afed05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728016112715730070,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395dacd00dc811c334e4fda7898664c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bca5274feb5a67a3e722abb2ee79d56e93ff3ff1cf0350f2dc0eadf6af6764,PodSandboxId:2a81540c23b0366c2d0063654b570487e3d57ef145c5363cd2ea803ab87301b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728016112672617127,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c206ade11d659fd6eef7ef29aa408cde,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b379c78d8a9fc71afe4dbc57caf6f54b81d515dce69290f74a4aaf7a111fab0,PodSandboxId:fb51e9c9fda9bc66053c3f42630875ded0784fabd965b6cd4f255ecd2e6f59db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728016112660334881,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e126f795bcf640ac6233faca19ff5b5e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be902a556db8d8f7a0a1a76bc3064baa5240668a920e4640fefb2198eaafa08b,PodSandboxId:94aae5132834a78902d1424736bb9dd45722628c7813d3da821ab29d14247c97,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728016112627866859,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73de2741451a14dbc69d5790883240975641ebae82f1af6f9acbd6d71c52000e,PodSandboxId:25174628a7c5d2aaeae687c122ee504f625f3fe09c9af3c9057bb1728b7ec3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728015823040740414,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-934812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb498a93de1326f90b260031f2ed41b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=434e274b-ecd2-4fcb-baef-f96cfe36fb72 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ee2305c441f29       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   ce87104926ac6       storage-provisioner
	3cbdcd3a324f4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   b40d8c44f59da       coredns-7c65d6cfc9-h5tbr
	188076ac7a7af       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   4c42d4b4b1430       coredns-7c65d6cfc9-p52s6
	ae4cec58f8215       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 minutes ago      Running             kube-proxy                0                   c8ba948590195       kube-proxy-9czbc
	3f368c0bb224d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   04daab29e1a1f       etcd-embed-certs-934812
	25bca5274feb5       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 minutes ago      Running             kube-scheduler            2                   2a81540c23b03       kube-scheduler-embed-certs-934812
	7b379c78d8a9f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   16 minutes ago      Running             kube-controller-manager   2                   fb51e9c9fda9b       kube-controller-manager-embed-certs-934812
	be902a556db8d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 minutes ago      Running             kube-apiserver            2                   94aae5132834a       kube-apiserver-embed-certs-934812
	73de2741451a1       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   25174628a7c5d       kube-apiserver-embed-certs-934812
	
	
	==> coredns [188076ac7a7af46670dbbdee6a82210ef511cdb240986543a3a8c664b4b0cb3f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [3cbdcd3a324f421f3f0dc2aff76b89a9da3e57f5627e61474562ff8414be5510] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-934812
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-934812
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=embed-certs-934812
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T04_28_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 04:28:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-934812
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 04:44:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 04:44:05 +0000   Fri, 04 Oct 2024 04:28:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 04:44:05 +0000   Fri, 04 Oct 2024 04:28:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 04:44:05 +0000   Fri, 04 Oct 2024 04:28:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 04:44:05 +0000   Fri, 04 Oct 2024 04:28:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.74
	  Hostname:    embed-certs-934812
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 360498a1f1444edcb55e87f15c79d8ba
	  System UUID:                360498a1-f144-4edc-b55e-87f15c79d8ba
	  Boot ID:                    401fba8b-79f6-4889-8e22-9516f8ae8624
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-h5tbr                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-p52s6                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-934812                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-934812             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-934812    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-9czbc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-934812             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-fh2lk               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-934812 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-934812 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-934812 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-934812 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-934812 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-934812 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-934812 event: Registered Node embed-certs-934812 in Controller
	
	
	==> dmesg <==
	[  +0.051138] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040154] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.813636] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.507720] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.394041] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.011587] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.055747] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061478] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.183766] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.156398] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.303105] systemd-fstab-generator[697]: Ignoring "noauto" option for root device
	[  +4.328839] systemd-fstab-generator[787]: Ignoring "noauto" option for root device
	[  +0.063432] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.733439] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +5.631506] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.320677] kauditd_printk_skb: 85 callbacks suppressed
	[Oct 4 04:28] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.173021] systemd-fstab-generator[2586]: Ignoring "noauto" option for root device
	[  +4.574723] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.542898] systemd-fstab-generator[2906]: Ignoring "noauto" option for root device
	[  +5.851208] systemd-fstab-generator[3042]: Ignoring "noauto" option for root device
	[  +0.047545] kauditd_printk_skb: 14 callbacks suppressed
	[Oct 4 04:29] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [3f368c0bb224d66a098cacaa67038f90f3d29fb0beb9bef1fee3cc4bba10ab34] <==
	{"level":"info","ts":"2024-10-04T04:28:33.604587Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T04:28:33.604631Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T04:28:33.605098Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T04:28:33.605127Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T04:28:33.605283Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T04:28:33.607821Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:28:33.610981Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T04:28:33.614402Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:28:33.624727Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.74:2379"}
	{"level":"info","ts":"2024-10-04T04:28:33.614978Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6b8a659e8e86db88","local-member-id":"6a7e013021d70f0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T04:28:33.640809Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T04:28:33.653402Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T04:38:33.685704Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":723}
	{"level":"info","ts":"2024-10-04T04:38:33.694594Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":723,"took":"8.25364ms","hash":1720095917,"current-db-size-bytes":2293760,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2293760,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-10-04T04:38:33.694676Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1720095917,"revision":723,"compact-revision":-1}
	{"level":"info","ts":"2024-10-04T04:43:33.692491Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":966}
	{"level":"info","ts":"2024-10-04T04:43:33.696489Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":966,"took":"3.446802ms","hash":1265030216,"current-db-size-bytes":2293760,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-04T04:43:33.696569Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1265030216,"revision":966,"compact-revision":723}
	{"level":"info","ts":"2024-10-04T04:44:43.249410Z","caller":"traceutil/trace.go:171","msg":"trace[749144057] transaction","detail":"{read_only:false; response_revision:1268; number_of_response:1; }","duration":"319.759859ms","start":"2024-10-04T04:44:42.929559Z","end":"2024-10-04T04:44:43.249319Z","steps":["trace[749144057] 'process raft request'  (duration: 319.215807ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T04:44:43.258467Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T04:44:42.929543Z","time spent":"320.923417ms","remote":"127.0.0.1:50874","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4372,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-6867b74b74-fh2lk\" mod_revision:1029 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-fh2lk\" value_size:4306 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-6867b74b74-fh2lk\" > >"}
	{"level":"info","ts":"2024-10-04T04:44:43.769289Z","caller":"traceutil/trace.go:171","msg":"trace[664671747] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"122.850556ms","start":"2024-10-04T04:44:43.646418Z","end":"2024-10-04T04:44:43.769269Z","steps":["trace[664671747] 'process raft request'  (duration: 122.538468ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T04:45:07.309073Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.559955ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T04:45:07.309167Z","caller":"traceutil/trace.go:171","msg":"trace[886429433] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1289; }","duration":"107.676649ms","start":"2024-10-04T04:45:07.201477Z","end":"2024-10-04T04:45:07.309154Z","steps":["trace[886429433] 'range keys from in-memory index tree'  (duration: 107.50005ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T04:45:07.309460Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.024106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T04:45:07.309516Z","caller":"traceutil/trace.go:171","msg":"trace[118826666] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1289; }","duration":"123.081199ms","start":"2024-10-04T04:45:07.186420Z","end":"2024-10-04T04:45:07.309502Z","steps":["trace[118826666] 'range keys from in-memory index tree'  (duration: 122.96294ms)"],"step_count":1}
	
	
	==> kernel <==
	 04:45:07 up 21 min,  0 users,  load average: 0.09, 0.10, 0.10
	Linux embed-certs-934812 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [73de2741451a14dbc69d5790883240975641ebae82f1af6f9acbd6d71c52000e] <==
	W1004 04:28:28.975793       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.020126       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.059908       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.068473       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.093322       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.153978       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.172829       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.228172       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.229651       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.423801       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.457772       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.594699       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.633632       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.633659       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.700101       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.712039       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.735643       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.788383       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.797922       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.819883       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.849307       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:29.947886       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:30.075118       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:30.078694       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1004 04:28:30.102089       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [be902a556db8d8f7a0a1a76bc3064baa5240668a920e4640fefb2198eaafa08b] <==
	I1004 04:41:36.227767       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:41:36.228848       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 04:43:35.227041       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:43:35.227247       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1004 04:43:36.228608       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:43:36.228720       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1004 04:43:36.228653       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:43:36.228842       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1004 04:43:36.229953       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:43:36.229998       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 04:44:36.231178       1 handler_proxy.go:99] no RequestInfo found in the context
	W1004 04:44:36.231337       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:44:36.231561       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1004 04:44:36.231430       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1004 04:44:36.232752       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:44:36.232796       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
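
	The repeated 503 responses for v1beta1.metrics.k8s.io above mean the aggregated metrics API was registered but the metrics-server pod backing it never became ready. As an illustrative follow-up only (not part of the captured report), the APIService and the pod behind it could be inspected from the host; the kubectl context name embed-certs-934812 and the k8s-app=metrics-server label are assumptions based on the node name shown earlier and the usual metrics-server manifest:

	    kubectl --context embed-certs-934812 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context embed-certs-934812 -n kube-system get pods -l k8s-app=metrics-server -o wide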
	
	
	==> kube-controller-manager [7b379c78d8a9fc71afe4dbc57caf6f54b81d515dce69290f74a4aaf7a111fab0] <==
	I1004 04:39:42.778062       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 04:39:46.934048       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="72.115µs"
	E1004 04:40:12.230574       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:40:12.785697       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:40:42.238112       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:40:42.795081       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:41:12.244375       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:41:12.802922       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:41:42.250495       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:41:42.809869       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:42:12.258469       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:42:12.819603       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:42:42.266351       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:42:42.826716       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:43:12.273166       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:43:12.834366       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:43:42.279472       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:43:42.841768       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 04:44:05.924869       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-934812"
	E1004 04:44:12.286847       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:44:12.851559       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:44:42.295169       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:44:42.861970       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 04:44:43.255729       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="346.841µs"
	I1004 04:44:55.936743       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="149.326µs"
	
	
	==> kube-proxy [ae4cec58f8215a7fb666645448240662fbda789f0558c0926b20f9f0958517b9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 04:28:44.651309       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 04:28:44.668251       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.74"]
	E1004 04:28:44.668320       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 04:28:44.835400       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 04:28:44.839884       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 04:28:44.839910       1 server_linux.go:169] "Using iptables Proxier"
	I1004 04:28:44.864586       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 04:28:44.867565       1 server.go:483] "Version info" version="v1.31.1"
	I1004 04:28:44.867587       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 04:28:44.875410       1 config.go:199] "Starting service config controller"
	I1004 04:28:44.875456       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 04:28:44.875488       1 config.go:105] "Starting endpoint slice config controller"
	I1004 04:28:44.875492       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 04:28:44.875998       1 config.go:328] "Starting node config controller"
	I1004 04:28:44.876005       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 04:28:44.980313       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 04:28:45.001871       1 shared_informer.go:320] Caches are synced for service config
	I1004 04:28:45.008468       1 shared_informer.go:320] Caches are synced for node config
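
	The kube-proxy log above shows its nftables cleanup failing with "Operation not supported" for both the ip and ip6 kube-proxy tables, after which it falls back to the single-stack IPv4 iptables proxier. A minimal sketch of re-running the same nftables statement by hand inside the guest (an illustrative check, not part of the captured report; it assumes shell access via minikube ssh):

	    # attempt the same statement kube-proxy piped over /dev/stdin
	    echo 'add table ip kube-proxy' | sudo nft -f -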
	
	
	==> kube-scheduler [25bca5274feb5a67a3e722abb2ee79d56e93ff3ff1cf0350f2dc0eadf6af6764] <==
	W1004 04:28:36.071459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 04:28:36.071501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.105401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 04:28:36.105516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.140621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 04:28:36.140691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.165999       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 04:28:36.166095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.199374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 04:28:36.199501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.269460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 04:28:36.269655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.347395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 04:28:36.347448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.441774       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 04:28:36.441825       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1004 04:28:36.480988       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 04:28:36.481043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.500958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 04:28:36.501020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.552930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 04:28:36.553613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 04:28:36.580411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 04:28:36.580464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 04:28:38.325842       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 04:44:17 embed-certs-934812 kubelet[2913]: E1004 04:44:17.921162    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fh2lk" podUID="12e3e884-2ad3-4eaa-a505-822717e5bc8c"
	Oct 04 04:44:18 embed-certs-934812 kubelet[2913]: E1004 04:44:18.166951    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017058166597928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:44:18 embed-certs-934812 kubelet[2913]: E1004 04:44:18.166999    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017058166597928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:44:28 embed-certs-934812 kubelet[2913]: E1004 04:44:28.168865    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017068168298195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:44:28 embed-certs-934812 kubelet[2913]: E1004 04:44:28.169268    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017068168298195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:44:31 embed-certs-934812 kubelet[2913]: E1004 04:44:31.936417    2913 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 04 04:44:31 embed-certs-934812 kubelet[2913]: E1004 04:44:31.936518    2913 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 04 04:44:31 embed-certs-934812 kubelet[2913]: E1004 04:44:31.936750    2913 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jmtwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-fh2lk_kube-system(12e3e884-2ad3-4eaa-a505-822717e5bc8c): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Oct 04 04:44:31 embed-certs-934812 kubelet[2913]: E1004 04:44:31.938157    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-fh2lk" podUID="12e3e884-2ad3-4eaa-a505-822717e5bc8c"
	Oct 04 04:44:37 embed-certs-934812 kubelet[2913]: E1004 04:44:37.967284    2913 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 04:44:37 embed-certs-934812 kubelet[2913]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 04:44:37 embed-certs-934812 kubelet[2913]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 04:44:37 embed-certs-934812 kubelet[2913]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 04:44:37 embed-certs-934812 kubelet[2913]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 04:44:38 embed-certs-934812 kubelet[2913]: E1004 04:44:38.171850    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017078170928802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:44:38 embed-certs-934812 kubelet[2913]: E1004 04:44:38.171901    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017078170928802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:44:42 embed-certs-934812 kubelet[2913]: E1004 04:44:42.920270    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fh2lk" podUID="12e3e884-2ad3-4eaa-a505-822717e5bc8c"
	Oct 04 04:44:48 embed-certs-934812 kubelet[2913]: E1004 04:44:48.178761    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017088173678988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:44:48 embed-certs-934812 kubelet[2913]: E1004 04:44:48.178821    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017088173678988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:44:55 embed-certs-934812 kubelet[2913]: E1004 04:44:55.920076    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fh2lk" podUID="12e3e884-2ad3-4eaa-a505-822717e5bc8c"
	Oct 04 04:44:58 embed-certs-934812 kubelet[2913]: E1004 04:44:58.179948    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017098179632009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:44:58 embed-certs-934812 kubelet[2913]: E1004 04:44:58.180011    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017098179632009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:45:06 embed-certs-934812 kubelet[2913]: E1004 04:45:06.920546    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fh2lk" podUID="12e3e884-2ad3-4eaa-a505-822717e5bc8c"
	Oct 04 04:45:08 embed-certs-934812 kubelet[2913]: E1004 04:45:08.185890    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017108185118858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:45:08 embed-certs-934812 kubelet[2913]: E1004 04:45:08.185917    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017108185118858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [ee2305c441f29525be535936a0d26d917c6641ba676c0cd946dd389e33592d0e] <==
	I1004 04:28:45.430117       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 04:28:45.445326       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 04:28:45.445390       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 04:28:45.458072       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 04:28:45.458880       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-934812_a13376cf-89b1-44f7-9229-91123f906dfe!
	I1004 04:28:45.465338       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5655821d-afa4-442d-a23b-224ce4c930c8", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-934812_a13376cf-89b1-44f7-9229-91123f906dfe became leader
	I1004 04:28:45.559436       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-934812_a13376cf-89b1-44f7-9229-91123f906dfe!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-934812 -n embed-certs-934812
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-934812 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-fh2lk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-934812 describe pod metrics-server-6867b74b74-fh2lk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-934812 describe pod metrics-server-6867b74b74-fh2lk: exit status 1 (63.702582ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-fh2lk" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-934812 describe pod metrics-server-6867b74b74-fh2lk: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (433.05s)
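The kubelet log above shows why the post-mortem found a non-running metrics-server pod: the deployment references fake.domain/registry.k8s.io/echoserver:1.4, and fake.domain does not resolve ("dial tcp: lookup fake.domain: no such host"), so every pull ends in ErrImagePull/ImagePullBackOff. A minimal manual re-check of the same state (hypothetical commands, not part of the test run; the profile name, namespace and image are taken from the log above, and the k8s-app=metrics-server label is an assumption about the addon manifest):

	# inspect the stuck metrics-server pod and the image its deployment references
	kubectl --context embed-certs-934812 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context embed-certs-934812 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# the addon this test waits for
	kubectl --context embed-certs-934812 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard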

x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (543.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-04 04:47:01.195316168 +0000 UTC m=+7140.128256728
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-281471 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-281471 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (77.845304ms)

** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-281471 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
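The describe call fails because the kubernetes-dashboard namespace does not exist on this profile at post-mortem time, so there is no deployment whose image could be compared against registry.k8s.io/echoserver:1.4. A hedged manual follow-up (hypothetical, not executed by the test; assumes the dashboard addon can still be enabled on the running profile):

	# re-enable the dashboard addon and read back the dashboard-metrics-scraper image
	out/minikube-linux-amd64 -p default-k8s-diff-port-281471 addons enable dashboard
	kubectl --context default-k8s-diff-port-281471 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[0].image}'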
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-281471 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-281471 logs -n 25: (1.8854136s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-204413 sudo                               | kindnet-204413            | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-204413 sudo                               | kindnet-204413            | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-204413 sudo cat                           | kindnet-204413            | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-204413 sudo cat                           | kindnet-204413            | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-204413 sudo                               | kindnet-204413            | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-204413 sudo                               | kindnet-204413            | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-204413 sudo                               | kindnet-204413            | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-204413 sudo cat                           | kindnet-204413            | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-204413 sudo cat                           | kindnet-204413            | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-204413 sudo                               | kindnet-204413            | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-204413 sudo                               | kindnet-204413            | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-204413 sudo                               | kindnet-204413            | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-204413 sudo find                          | kindnet-204413            | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-204413 sudo crio                          | kindnet-204413            | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-204413                                    | kindnet-204413            | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	| start   | -p enable-default-cni-204413                         | enable-default-cni-204413 | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p calico-204413 pgrep -a                            | calico-204413             | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	| ssh     | -p calico-204413 sudo cat                            | calico-204413             | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | /etc/nsswitch.conf                                   |                           |         |         |                     |                     |
	| ssh     | -p calico-204413 sudo cat                            | calico-204413             | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | /etc/hosts                                           |                           |         |         |                     |                     |
	| ssh     | -p calico-204413 sudo cat                            | calico-204413             | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | /etc/resolv.conf                                     |                           |         |         |                     |                     |
	| ssh     | -p calico-204413 sudo crictl                         | calico-204413             | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | pods                                                 |                           |         |         |                     |                     |
	| ssh     | -p calico-204413 sudo crictl                         | calico-204413             | jenkins | v1.34.0 | 04 Oct 24 04:46 UTC | 04 Oct 24 04:46 UTC |
	|         | ps --all                                             |                           |         |         |                     |                     |
	| ssh     | -p calico-204413 sudo find                           | calico-204413             | jenkins | v1.34.0 | 04 Oct 24 04:47 UTC | 04 Oct 24 04:47 UTC |
	|         | /etc/cni -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p calico-204413 sudo ip a s                         | calico-204413             | jenkins | v1.34.0 | 04 Oct 24 04:47 UTC | 04 Oct 24 04:47 UTC |
	| ssh     | -p calico-204413 sudo ip r s                         | calico-204413             | jenkins | v1.34.0 | 04 Oct 24 04:47 UTC | 04 Oct 24 04:47 UTC |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 04:46:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 04:46:15.484297   77883 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:46:15.484405   77883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:46:15.484417   77883 out.go:358] Setting ErrFile to fd 2...
	I1004 04:46:15.484425   77883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:46:15.484619   77883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:46:15.485278   77883 out.go:352] Setting JSON to false
	I1004 04:46:15.486520   77883 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8920,"bootTime":1728008255,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 04:46:15.486586   77883 start.go:139] virtualization: kvm guest
	I1004 04:46:15.489056   77883 out.go:177] * [enable-default-cni-204413] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 04:46:15.490359   77883 notify.go:220] Checking for updates...
	I1004 04:46:15.490376   77883 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 04:46:15.491743   77883 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 04:46:15.493117   77883 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:46:15.494385   77883 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:46:15.495565   77883 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 04:46:15.496647   77883 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 04:46:15.498023   77883 config.go:182] Loaded profile config "calico-204413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:46:15.498128   77883 config.go:182] Loaded profile config "custom-flannel-204413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:46:15.498236   77883 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:46:15.498324   77883 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 04:46:15.534694   77883 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 04:46:15.535922   77883 start.go:297] selected driver: kvm2
	I1004 04:46:15.535950   77883 start.go:901] validating driver "kvm2" against <nil>
	I1004 04:46:15.535970   77883 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 04:46:15.537016   77883 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:46:15.537121   77883 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 04:46:15.553371   77883 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 04:46:15.553414   77883 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E1004 04:46:15.553699   77883 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1004 04:46:15.553723   77883 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:46:15.553744   77883 cni.go:84] Creating CNI manager for "bridge"
	I1004 04:46:15.553752   77883 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 04:46:15.553795   77883 start.go:340] cluster config:
	{Name:enable-default-cni-204413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-204413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPa
th: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:46:15.553894   77883 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:46:15.555421   77883 out.go:177] * Starting "enable-default-cni-204413" primary control-plane node in "enable-default-cni-204413" cluster
	I1004 04:46:10.725805   74599 node_ready.go:49] node "calico-204413" has status "Ready":"True"
	I1004 04:46:10.725839   74599 node_ready.go:38] duration metric: took 9.004925452s for node "calico-204413" to be "Ready" ...
	I1004 04:46:10.725851   74599 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:46:10.734436   74599 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-b8d8894fb-hqgwj" in "kube-system" namespace to be "Ready" ...
	I1004 04:46:12.741897   74599 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-hqgwj" in "kube-system" namespace has status "Ready":"False"
	I1004 04:46:14.742159   74599 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-hqgwj" in "kube-system" namespace has status "Ready":"False"
	I1004 04:46:11.527429   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:11.528169   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | unable to find current IP address of domain custom-flannel-204413 in network mk-custom-flannel-204413
	I1004 04:46:11.528199   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | I1004 04:46:11.528130   76467 retry.go:31] will retry after 2.146114152s: waiting for machine to come up
	I1004 04:46:13.676979   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:13.677509   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | unable to find current IP address of domain custom-flannel-204413 in network mk-custom-flannel-204413
	I1004 04:46:13.677546   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | I1004 04:46:13.677457   76467 retry.go:31] will retry after 3.507449697s: waiting for machine to come up
	I1004 04:46:15.556396   77883 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:46:15.556425   77883 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 04:46:15.556431   77883 cache.go:56] Caching tarball of preloaded images
	I1004 04:46:15.556511   77883 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 04:46:15.556522   77883 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 04:46:15.556639   77883 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/config.json ...
	I1004 04:46:15.556659   77883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/config.json: {Name:mkd1bd204a7aafb236def2897a176b8fdec59c2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:46:15.556783   77883 start.go:360] acquireMachinesLock for enable-default-cni-204413: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:46:16.831194   74599 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-hqgwj" in "kube-system" namespace has status "Ready":"False"
	I1004 04:46:19.249118   74599 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-hqgwj" in "kube-system" namespace has status "Ready":"False"
	I1004 04:46:17.186203   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:17.186666   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | unable to find current IP address of domain custom-flannel-204413 in network mk-custom-flannel-204413
	I1004 04:46:17.186686   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | I1004 04:46:17.186626   76467 retry.go:31] will retry after 2.889190597s: waiting for machine to come up
	I1004 04:46:20.077200   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:20.077820   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | unable to find current IP address of domain custom-flannel-204413 in network mk-custom-flannel-204413
	I1004 04:46:20.077844   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | I1004 04:46:20.077783   76467 retry.go:31] will retry after 5.209998712s: waiting for machine to come up
	I1004 04:46:21.265299   74599 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-hqgwj" in "kube-system" namespace has status "Ready":"False"
	I1004 04:46:23.740833   74599 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-hqgwj" in "kube-system" namespace has status "Ready":"False"
	I1004 04:46:25.289400   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:25.289869   76445 main.go:141] libmachine: (custom-flannel-204413) Found IP for machine: 192.168.50.4
	I1004 04:46:25.289891   76445 main.go:141] libmachine: (custom-flannel-204413) Reserving static IP address...
	I1004 04:46:25.289904   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has current primary IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:25.290309   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | unable to find host DHCP lease matching {name: "custom-flannel-204413", mac: "52:54:00:57:49:56", ip: "192.168.50.4"} in network mk-custom-flannel-204413
	I1004 04:46:25.366828   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | Getting to WaitForSSH function...
	I1004 04:46:25.366859   76445 main.go:141] libmachine: (custom-flannel-204413) Reserved static IP address: 192.168.50.4
	I1004 04:46:25.366871   76445 main.go:141] libmachine: (custom-flannel-204413) Waiting for SSH to be available...
	I1004 04:46:25.369647   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:25.370174   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:minikube Clientid:01:52:54:00:57:49:56}
	I1004 04:46:25.370204   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:25.370337   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | Using SSH client type: external
	I1004 04:46:25.370363   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/custom-flannel-204413/id_rsa (-rw-------)
	I1004 04:46:25.370395   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/custom-flannel-204413/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:46:25.370408   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | About to run SSH command:
	I1004 04:46:25.370427   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | exit 0
	I1004 04:46:25.507973   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | SSH cmd err, output: <nil>: 
	I1004 04:46:25.508259   76445 main.go:141] libmachine: (custom-flannel-204413) KVM machine creation complete!
	I1004 04:46:25.508591   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetConfigRaw
	I1004 04:46:25.509321   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .DriverName
	I1004 04:46:25.509547   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .DriverName
	I1004 04:46:25.509686   76445 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 04:46:25.509701   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetState
	I1004 04:46:25.510749   76445 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 04:46:25.510764   76445 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 04:46:25.510771   76445 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 04:46:25.510777   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHHostname
	I1004 04:46:25.513370   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:25.513762   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:25.513800   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:25.513965   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHPort
	I1004 04:46:25.514164   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:25.514325   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:25.514474   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHUsername
	I1004 04:46:25.514622   76445 main.go:141] libmachine: Using SSH client type: native
	I1004 04:46:25.514820   76445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1004 04:46:25.514833   76445 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 04:46:25.623462   76445 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:46:25.623495   76445 main.go:141] libmachine: Detecting the provisioner...
	I1004 04:46:25.623508   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHHostname
	I1004 04:46:25.626827   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:25.627251   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:25.627277   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:25.627447   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHPort
	I1004 04:46:25.627637   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:25.627835   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:25.628006   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHUsername
	I1004 04:46:25.628191   76445 main.go:141] libmachine: Using SSH client type: native
	I1004 04:46:25.628382   76445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1004 04:46:25.628395   76445 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 04:46:25.741027   76445 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 04:46:25.741085   76445 main.go:141] libmachine: found compatible host: buildroot
	I1004 04:46:25.741097   76445 main.go:141] libmachine: Provisioning with buildroot...
	I1004 04:46:25.741108   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetMachineName
	I1004 04:46:25.741346   76445 buildroot.go:166] provisioning hostname "custom-flannel-204413"
	I1004 04:46:25.741386   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetMachineName
	I1004 04:46:25.741617   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHHostname
	I1004 04:46:25.744836   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:25.745253   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:25.745279   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:25.745480   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHPort
	I1004 04:46:25.745654   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:25.745818   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:25.745976   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHUsername
	I1004 04:46:25.746149   76445 main.go:141] libmachine: Using SSH client type: native
	I1004 04:46:25.746353   76445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1004 04:46:25.746373   76445 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-204413 && echo "custom-flannel-204413" | sudo tee /etc/hostname
	I1004 04:46:25.882585   76445 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-204413
	
	I1004 04:46:25.882618   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHHostname
	I1004 04:46:25.885623   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:25.885966   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:25.885994   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:25.886201   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHPort
	I1004 04:46:25.886406   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:25.886585   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:25.886729   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHUsername
	I1004 04:46:25.886887   76445 main.go:141] libmachine: Using SSH client type: native
	I1004 04:46:25.887129   76445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1004 04:46:25.887155   76445 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-204413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-204413/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-204413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:46:26.009356   76445 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:46:26.009379   76445 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:46:26.009395   76445 buildroot.go:174] setting up certificates
	I1004 04:46:26.009407   76445 provision.go:84] configureAuth start
	I1004 04:46:26.009420   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetMachineName
	I1004 04:46:26.009697   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetIP
	I1004 04:46:26.012563   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.013044   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:26.013078   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.013278   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHHostname
	I1004 04:46:26.015718   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.015986   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:26.016013   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.016157   76445 provision.go:143] copyHostCerts
	I1004 04:46:26.016229   76445 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:46:26.016246   76445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:46:26.016331   76445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:46:26.016447   76445 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:46:26.016458   76445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:46:26.016497   76445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:46:26.016629   76445 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:46:26.016641   76445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:46:26.016678   76445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:46:26.016760   76445 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-204413 san=[127.0.0.1 192.168.50.4 custom-flannel-204413 localhost minikube]
	I1004 04:46:26.812829   77883 start.go:364] duration metric: took 11.256007735s to acquireMachinesLock for "enable-default-cni-204413"
	I1004 04:46:26.812885   77883 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-204413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-204413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:46:26.813067   77883 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 04:46:26.140557   76445 provision.go:177] copyRemoteCerts
	I1004 04:46:26.140623   76445 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:46:26.140649   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHHostname
	I1004 04:46:26.143368   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.143761   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:26.143801   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.143994   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHPort
	I1004 04:46:26.144147   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:26.144272   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHUsername
	I1004 04:46:26.144388   76445 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/custom-flannel-204413/id_rsa Username:docker}
	I1004 04:46:26.230652   76445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:46:26.255489   76445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1004 04:46:26.280124   76445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 04:46:26.303211   76445 provision.go:87] duration metric: took 293.793267ms to configureAuth
	I1004 04:46:26.303236   76445 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:46:26.303423   76445 config.go:182] Loaded profile config "custom-flannel-204413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:46:26.303528   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHHostname
	I1004 04:46:26.306337   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.306634   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:26.306667   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.306832   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHPort
	I1004 04:46:26.307003   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:26.307180   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:26.307294   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHUsername
	I1004 04:46:26.307489   76445 main.go:141] libmachine: Using SSH client type: native
	I1004 04:46:26.307695   76445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1004 04:46:26.307712   76445 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:46:26.554706   76445 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:46:26.554730   76445 main.go:141] libmachine: Checking connection to Docker...
	I1004 04:46:26.554740   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetURL
	I1004 04:46:26.556147   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | Using libvirt version 6000000
	I1004 04:46:26.558599   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.558972   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:26.559000   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.559191   76445 main.go:141] libmachine: Docker is up and running!
	I1004 04:46:26.559207   76445 main.go:141] libmachine: Reticulating splines...
	I1004 04:46:26.559215   76445 client.go:171] duration metric: took 25.373065948s to LocalClient.Create
	I1004 04:46:26.559241   76445 start.go:167] duration metric: took 25.373182459s to libmachine.API.Create "custom-flannel-204413"
	I1004 04:46:26.559253   76445 start.go:293] postStartSetup for "custom-flannel-204413" (driver="kvm2")
	I1004 04:46:26.559264   76445 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:46:26.559302   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .DriverName
	I1004 04:46:26.559556   76445 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:46:26.559583   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHHostname
	I1004 04:46:26.561907   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.562332   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:26.562356   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.562492   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHPort
	I1004 04:46:26.562650   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:26.562799   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHUsername
	I1004 04:46:26.562903   76445 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/custom-flannel-204413/id_rsa Username:docker}
	I1004 04:46:26.650884   76445 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:46:26.655265   76445 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:46:26.655284   76445 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:46:26.655343   76445 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:46:26.655442   76445 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:46:26.655557   76445 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:46:26.665330   76445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:46:26.691099   76445 start.go:296] duration metric: took 131.833232ms for postStartSetup
	I1004 04:46:26.691151   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetConfigRaw
	I1004 04:46:26.691844   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetIP
	I1004 04:46:26.694955   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.695346   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:26.695375   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.695704   76445 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/config.json ...
	I1004 04:46:26.695945   76445 start.go:128] duration metric: took 25.528802317s to createHost
	I1004 04:46:26.695976   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHHostname
	I1004 04:46:26.698369   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.698778   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:26.698806   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.699016   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHPort
	I1004 04:46:26.699195   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:26.699436   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:26.699620   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHUsername
	I1004 04:46:26.699817   76445 main.go:141] libmachine: Using SSH client type: native
	I1004 04:46:26.700070   76445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1004 04:46:26.700086   76445 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:46:26.812683   76445 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728017186.785657693
	
	I1004 04:46:26.812706   76445 fix.go:216] guest clock: 1728017186.785657693
	I1004 04:46:26.812715   76445 fix.go:229] Guest: 2024-10-04 04:46:26.785657693 +0000 UTC Remote: 2024-10-04 04:46:26.695961014 +0000 UTC m=+25.648022359 (delta=89.696679ms)
	I1004 04:46:26.812737   76445 fix.go:200] guest clock delta is within tolerance: 89.696679ms
	I1004 04:46:26.812741   76445 start.go:83] releasing machines lock for "custom-flannel-204413", held for 25.645718166s
	I1004 04:46:26.812764   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .DriverName
	I1004 04:46:26.813004   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetIP
	I1004 04:46:26.816258   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.816679   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:26.816724   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.816912   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .DriverName
	I1004 04:46:26.817394   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .DriverName
	I1004 04:46:26.817599   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .DriverName
	I1004 04:46:26.817689   76445 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:46:26.817727   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHHostname
	I1004 04:46:26.817817   76445 ssh_runner.go:195] Run: cat /version.json
	I1004 04:46:26.817843   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHHostname
	I1004 04:46:26.820600   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.820787   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.820999   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:26.821025   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.821171   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHPort
	I1004 04:46:26.821295   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:26.821322   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:26.821327   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:26.821488   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHPort
	I1004 04:46:26.821491   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHUsername
	I1004 04:46:26.821621   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:26.821680   76445 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/custom-flannel-204413/id_rsa Username:docker}
	I1004 04:46:26.821751   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHUsername
	I1004 04:46:26.821884   76445 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/custom-flannel-204413/id_rsa Username:docker}
	I1004 04:46:26.925769   76445 ssh_runner.go:195] Run: systemctl --version
	I1004 04:46:26.932790   76445 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:46:27.097653   76445 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:46:27.105027   76445 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:46:27.105081   76445 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:46:27.121883   76445 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:46:27.121903   76445 start.go:495] detecting cgroup driver to use...
	I1004 04:46:27.121964   76445 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:46:27.139124   76445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:46:27.153299   76445 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:46:27.153357   76445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:46:27.167351   76445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:46:27.181668   76445 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:46:27.324001   76445 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:46:27.477575   76445 docker.go:233] disabling docker service ...
	I1004 04:46:27.477638   76445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:46:27.494659   76445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:46:27.510659   76445 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:46:27.667626   76445 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:46:27.805693   76445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:46:27.821051   76445 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:46:27.839746   76445 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:46:27.839839   76445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:46:27.850151   76445 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:46:27.850209   76445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:46:27.860635   76445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:46:27.870969   76445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:46:27.881638   76445 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:46:27.892314   76445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:46:27.904398   76445 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:46:27.922512   76445 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:46:27.933512   76445 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:46:27.945727   76445 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:46:27.945779   76445 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:46:27.959550   76445 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:46:27.970174   76445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:46:28.100800   76445 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:46:28.201605   76445 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:46:28.201676   76445 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:46:28.208401   76445 start.go:563] Will wait 60s for crictl version
	I1004 04:46:28.208463   76445 ssh_runner.go:195] Run: which crictl
	I1004 04:46:28.212509   76445 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:46:28.259743   76445 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:46:28.259852   76445 ssh_runner.go:195] Run: crio --version
	I1004 04:46:28.291324   76445 ssh_runner.go:195] Run: crio --version
	I1004 04:46:28.324621   76445 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:46:26.815029   77883 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1004 04:46:26.815249   77883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:46:26.815296   77883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:46:26.834691   77883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
	I1004 04:46:26.835243   77883 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:46:26.835857   77883 main.go:141] libmachine: Using API Version  1
	I1004 04:46:26.835881   77883 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:46:26.836248   77883 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:46:26.836444   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetMachineName
	I1004 04:46:26.836605   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .DriverName
	I1004 04:46:26.836752   77883 start.go:159] libmachine.API.Create for "enable-default-cni-204413" (driver="kvm2")
	I1004 04:46:26.836783   77883 client.go:168] LocalClient.Create starting
	I1004 04:46:26.836814   77883 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem
	I1004 04:46:26.836855   77883 main.go:141] libmachine: Decoding PEM data...
	I1004 04:46:26.836876   77883 main.go:141] libmachine: Parsing certificate...
	I1004 04:46:26.836953   77883 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem
	I1004 04:46:26.836977   77883 main.go:141] libmachine: Decoding PEM data...
	I1004 04:46:26.836995   77883 main.go:141] libmachine: Parsing certificate...
	I1004 04:46:26.837018   77883 main.go:141] libmachine: Running pre-create checks...
	I1004 04:46:26.837029   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .PreCreateCheck
	I1004 04:46:26.837464   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetConfigRaw
	I1004 04:46:26.837927   77883 main.go:141] libmachine: Creating machine...
	I1004 04:46:26.837943   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .Create
	I1004 04:46:26.838084   77883 main.go:141] libmachine: (enable-default-cni-204413) Creating KVM machine...
	I1004 04:46:26.839262   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found existing default KVM network
	I1004 04:46:26.840632   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:26.840493   78000 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:0a:17:6f} reservation:<nil>}
	I1004 04:46:26.841841   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:26.841752   78000 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:a6:cd:b6} reservation:<nil>}
	I1004 04:46:26.842871   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:26.842800   78000 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:64:10:50} reservation:<nil>}
	I1004 04:46:26.844043   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:26.843957   78000 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003811b0}
	I1004 04:46:26.844151   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | created network xml: 
	I1004 04:46:26.844172   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | <network>
	I1004 04:46:26.844181   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG |   <name>mk-enable-default-cni-204413</name>
	I1004 04:46:26.844193   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG |   <dns enable='no'/>
	I1004 04:46:26.844200   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG |   
	I1004 04:46:26.844215   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1004 04:46:26.844231   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG |     <dhcp>
	I1004 04:46:26.844243   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1004 04:46:26.844254   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG |     </dhcp>
	I1004 04:46:26.844279   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG |   </ip>
	I1004 04:46:26.844300   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG |   
	I1004 04:46:26.844312   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | </network>
	I1004 04:46:26.844323   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | 
	I1004 04:46:26.849431   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | trying to create private KVM network mk-enable-default-cni-204413 192.168.72.0/24...
	I1004 04:46:26.922715   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | private KVM network mk-enable-default-cni-204413 192.168.72.0/24 created
	I1004 04:46:26.922744   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:26.922688   78000 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:46:26.922758   77883 main.go:141] libmachine: (enable-default-cni-204413) Setting up store path in /home/jenkins/minikube-integration/19546-9647/.minikube/machines/enable-default-cni-204413 ...
	I1004 04:46:26.922775   77883 main.go:141] libmachine: (enable-default-cni-204413) Building disk image from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 04:46:26.922953   77883 main.go:141] libmachine: (enable-default-cni-204413) Downloading /home/jenkins/minikube-integration/19546-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1004 04:46:27.166827   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:27.166697   78000 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/enable-default-cni-204413/id_rsa...
	I1004 04:46:27.275289   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:27.275171   78000 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/enable-default-cni-204413/enable-default-cni-204413.rawdisk...
	I1004 04:46:27.275315   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | Writing magic tar header
	I1004 04:46:27.275350   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | Writing SSH key tar header
	I1004 04:46:27.275404   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:27.275334   78000 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/enable-default-cni-204413 ...
	I1004 04:46:27.275524   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/enable-default-cni-204413
	I1004 04:46:27.275556   77883 main.go:141] libmachine: (enable-default-cni-204413) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines/enable-default-cni-204413 (perms=drwx------)
	I1004 04:46:27.275567   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube/machines
	I1004 04:46:27.275581   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:46:27.275591   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19546-9647
	I1004 04:46:27.275606   77883 main.go:141] libmachine: (enable-default-cni-204413) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube/machines (perms=drwxr-xr-x)
	I1004 04:46:27.275622   77883 main.go:141] libmachine: (enable-default-cni-204413) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647/.minikube (perms=drwxr-xr-x)
	I1004 04:46:27.275635   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 04:46:27.275646   77883 main.go:141] libmachine: (enable-default-cni-204413) Setting executable bit set on /home/jenkins/minikube-integration/19546-9647 (perms=drwxrwxr-x)
	I1004 04:46:27.275661   77883 main.go:141] libmachine: (enable-default-cni-204413) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 04:46:27.275672   77883 main.go:141] libmachine: (enable-default-cni-204413) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 04:46:27.275687   77883 main.go:141] libmachine: (enable-default-cni-204413) Creating domain...
	I1004 04:46:27.275703   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | Checking permissions on dir: /home/jenkins
	I1004 04:46:27.275719   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | Checking permissions on dir: /home
	I1004 04:46:27.275733   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | Skipping /home - not owner
	I1004 04:46:27.276926   77883 main.go:141] libmachine: (enable-default-cni-204413) define libvirt domain using xml: 
	I1004 04:46:27.276942   77883 main.go:141] libmachine: (enable-default-cni-204413) <domain type='kvm'>
	I1004 04:46:27.276950   77883 main.go:141] libmachine: (enable-default-cni-204413)   <name>enable-default-cni-204413</name>
	I1004 04:46:27.276955   77883 main.go:141] libmachine: (enable-default-cni-204413)   <memory unit='MiB'>3072</memory>
	I1004 04:46:27.276960   77883 main.go:141] libmachine: (enable-default-cni-204413)   <vcpu>2</vcpu>
	I1004 04:46:27.276963   77883 main.go:141] libmachine: (enable-default-cni-204413)   <features>
	I1004 04:46:27.276970   77883 main.go:141] libmachine: (enable-default-cni-204413)     <acpi/>
	I1004 04:46:27.276974   77883 main.go:141] libmachine: (enable-default-cni-204413)     <apic/>
	I1004 04:46:27.276979   77883 main.go:141] libmachine: (enable-default-cni-204413)     <pae/>
	I1004 04:46:27.276983   77883 main.go:141] libmachine: (enable-default-cni-204413)     
	I1004 04:46:27.276988   77883 main.go:141] libmachine: (enable-default-cni-204413)   </features>
	I1004 04:46:27.276998   77883 main.go:141] libmachine: (enable-default-cni-204413)   <cpu mode='host-passthrough'>
	I1004 04:46:27.277003   77883 main.go:141] libmachine: (enable-default-cni-204413)   
	I1004 04:46:27.277008   77883 main.go:141] libmachine: (enable-default-cni-204413)   </cpu>
	I1004 04:46:27.277013   77883 main.go:141] libmachine: (enable-default-cni-204413)   <os>
	I1004 04:46:27.277018   77883 main.go:141] libmachine: (enable-default-cni-204413)     <type>hvm</type>
	I1004 04:46:27.277023   77883 main.go:141] libmachine: (enable-default-cni-204413)     <boot dev='cdrom'/>
	I1004 04:46:27.277030   77883 main.go:141] libmachine: (enable-default-cni-204413)     <boot dev='hd'/>
	I1004 04:46:27.277041   77883 main.go:141] libmachine: (enable-default-cni-204413)     <bootmenu enable='no'/>
	I1004 04:46:27.277048   77883 main.go:141] libmachine: (enable-default-cni-204413)   </os>
	I1004 04:46:27.277052   77883 main.go:141] libmachine: (enable-default-cni-204413)   <devices>
	I1004 04:46:27.277063   77883 main.go:141] libmachine: (enable-default-cni-204413)     <disk type='file' device='cdrom'>
	I1004 04:46:27.277096   77883 main.go:141] libmachine: (enable-default-cni-204413)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/enable-default-cni-204413/boot2docker.iso'/>
	I1004 04:46:27.277117   77883 main.go:141] libmachine: (enable-default-cni-204413)       <target dev='hdc' bus='scsi'/>
	I1004 04:46:27.277152   77883 main.go:141] libmachine: (enable-default-cni-204413)       <readonly/>
	I1004 04:46:27.277171   77883 main.go:141] libmachine: (enable-default-cni-204413)     </disk>
	I1004 04:46:27.277182   77883 main.go:141] libmachine: (enable-default-cni-204413)     <disk type='file' device='disk'>
	I1004 04:46:27.277195   77883 main.go:141] libmachine: (enable-default-cni-204413)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 04:46:27.277224   77883 main.go:141] libmachine: (enable-default-cni-204413)       <source file='/home/jenkins/minikube-integration/19546-9647/.minikube/machines/enable-default-cni-204413/enable-default-cni-204413.rawdisk'/>
	I1004 04:46:27.277231   77883 main.go:141] libmachine: (enable-default-cni-204413)       <target dev='hda' bus='virtio'/>
	I1004 04:46:27.277237   77883 main.go:141] libmachine: (enable-default-cni-204413)     </disk>
	I1004 04:46:27.277244   77883 main.go:141] libmachine: (enable-default-cni-204413)     <interface type='network'>
	I1004 04:46:27.277253   77883 main.go:141] libmachine: (enable-default-cni-204413)       <source network='mk-enable-default-cni-204413'/>
	I1004 04:46:27.277263   77883 main.go:141] libmachine: (enable-default-cni-204413)       <model type='virtio'/>
	I1004 04:46:27.277281   77883 main.go:141] libmachine: (enable-default-cni-204413)     </interface>
	I1004 04:46:27.277292   77883 main.go:141] libmachine: (enable-default-cni-204413)     <interface type='network'>
	I1004 04:46:27.277303   77883 main.go:141] libmachine: (enable-default-cni-204413)       <source network='default'/>
	I1004 04:46:27.277316   77883 main.go:141] libmachine: (enable-default-cni-204413)       <model type='virtio'/>
	I1004 04:46:27.277332   77883 main.go:141] libmachine: (enable-default-cni-204413)     </interface>
	I1004 04:46:27.277346   77883 main.go:141] libmachine: (enable-default-cni-204413)     <serial type='pty'>
	I1004 04:46:27.277366   77883 main.go:141] libmachine: (enable-default-cni-204413)       <target port='0'/>
	I1004 04:46:27.277376   77883 main.go:141] libmachine: (enable-default-cni-204413)     </serial>
	I1004 04:46:27.277386   77883 main.go:141] libmachine: (enable-default-cni-204413)     <console type='pty'>
	I1004 04:46:27.277398   77883 main.go:141] libmachine: (enable-default-cni-204413)       <target type='serial' port='0'/>
	I1004 04:46:27.277410   77883 main.go:141] libmachine: (enable-default-cni-204413)     </console>
	I1004 04:46:27.277426   77883 main.go:141] libmachine: (enable-default-cni-204413)     <rng model='virtio'>
	I1004 04:46:27.277439   77883 main.go:141] libmachine: (enable-default-cni-204413)       <backend model='random'>/dev/random</backend>
	I1004 04:46:27.277450   77883 main.go:141] libmachine: (enable-default-cni-204413)     </rng>
	I1004 04:46:27.277460   77883 main.go:141] libmachine: (enable-default-cni-204413)     
	I1004 04:46:27.277470   77883 main.go:141] libmachine: (enable-default-cni-204413)     
	I1004 04:46:27.277480   77883 main.go:141] libmachine: (enable-default-cni-204413)   </devices>
	I1004 04:46:27.277495   77883 main.go:141] libmachine: (enable-default-cni-204413) </domain>
	I1004 04:46:27.277511   77883 main.go:141] libmachine: (enable-default-cni-204413) 
	I1004 04:46:27.281926   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:02:f4:58 in network default
	I1004 04:46:27.282633   77883 main.go:141] libmachine: (enable-default-cni-204413) Ensuring networks are active...
	I1004 04:46:27.282651   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:27.283410   77883 main.go:141] libmachine: (enable-default-cni-204413) Ensuring network default is active
	I1004 04:46:27.283761   77883 main.go:141] libmachine: (enable-default-cni-204413) Ensuring network mk-enable-default-cni-204413 is active
	I1004 04:46:27.284379   77883 main.go:141] libmachine: (enable-default-cni-204413) Getting domain xml...
	I1004 04:46:27.285115   77883 main.go:141] libmachine: (enable-default-cni-204413) Creating domain...
	I1004 04:46:28.632592   77883 main.go:141] libmachine: (enable-default-cni-204413) Waiting to get IP...
	I1004 04:46:28.633519   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:28.634024   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | unable to find current IP address of domain enable-default-cni-204413 in network mk-enable-default-cni-204413
	I1004 04:46:28.634064   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:28.634011   78000 retry.go:31] will retry after 222.03357ms: waiting for machine to come up
	I1004 04:46:28.857758   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:28.858303   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | unable to find current IP address of domain enable-default-cni-204413 in network mk-enable-default-cni-204413
	I1004 04:46:28.858331   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:28.858227   78000 retry.go:31] will retry after 378.846377ms: waiting for machine to come up
	I1004 04:46:29.238540   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:29.239012   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | unable to find current IP address of domain enable-default-cni-204413 in network mk-enable-default-cni-204413
	I1004 04:46:29.239035   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:29.238973   78000 retry.go:31] will retry after 468.825348ms: waiting for machine to come up
	I1004 04:46:29.709975   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:29.710402   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | unable to find current IP address of domain enable-default-cni-204413 in network mk-enable-default-cni-204413
	I1004 04:46:29.710423   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:29.710360   78000 retry.go:31] will retry after 516.409897ms: waiting for machine to come up
	I1004 04:46:30.228318   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:30.228791   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | unable to find current IP address of domain enable-default-cni-204413 in network mk-enable-default-cni-204413
	I1004 04:46:30.228831   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:30.228768   78000 retry.go:31] will retry after 512.409403ms: waiting for machine to come up
	I1004 04:46:25.741979   74599 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-hqgwj" in "kube-system" namespace has status "Ready":"False"
	I1004 04:46:28.242871   74599 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-hqgwj" in "kube-system" namespace has status "Ready":"False"
	I1004 04:46:30.243571   74599 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-hqgwj" in "kube-system" namespace has status "Ready":"False"
	I1004 04:46:28.325849   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetIP
	I1004 04:46:28.329077   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:28.329598   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:28.329627   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:28.329825   76445 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1004 04:46:28.334253   76445 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:46:28.348765   76445 kubeadm.go:883] updating cluster {Name:custom-flannel-204413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-204413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.50.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:46:28.348934   76445 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:46:28.348999   76445 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:46:28.383614   76445 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:46:28.383678   76445 ssh_runner.go:195] Run: which lz4
	I1004 04:46:28.387732   76445 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:46:28.392072   76445 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:46:28.392094   76445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 04:46:29.801002   76445 crio.go:462] duration metric: took 1.413317668s to copy over tarball
	I1004 04:46:29.801082   76445 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:46:31.750772   74599 pod_ready.go:93] pod "calico-kube-controllers-b8d8894fb-hqgwj" in "kube-system" namespace has status "Ready":"True"
	I1004 04:46:31.750806   74599 pod_ready.go:82] duration metric: took 21.016342115s for pod "calico-kube-controllers-b8d8894fb-hqgwj" in "kube-system" namespace to be "Ready" ...
	I1004 04:46:31.750840   74599 pod_ready.go:79] waiting up to 15m0s for pod "calico-node-j8qgw" in "kube-system" namespace to be "Ready" ...
	I1004 04:46:31.759506   74599 pod_ready.go:93] pod "calico-node-j8qgw" in "kube-system" namespace has status "Ready":"True"
	I1004 04:46:31.759534   74599 pod_ready.go:82] duration metric: took 8.685363ms for pod "calico-node-j8qgw" in "kube-system" namespace to be "Ready" ...
	I1004 04:46:31.759546   74599 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-v9vv8" in "kube-system" namespace to be "Ready" ...
	I1004 04:46:31.770994   74599 pod_ready.go:93] pod "coredns-7c65d6cfc9-v9vv8" in "kube-system" namespace has status "Ready":"True"
	I1004 04:46:31.771020   74599 pod_ready.go:82] duration metric: took 11.465623ms for pod "coredns-7c65d6cfc9-v9vv8" in "kube-system" namespace to be "Ready" ...
	I1004 04:46:31.771032   74599 pod_ready.go:79] waiting up to 15m0s for pod "etcd-calico-204413" in "kube-system" namespace to be "Ready" ...
	I1004 04:46:31.776362   74599 pod_ready.go:93] pod "etcd-calico-204413" in "kube-system" namespace has status "Ready":"True"
	I1004 04:46:31.776390   74599 pod_ready.go:82] duration metric: took 5.350167ms for pod "etcd-calico-204413" in "kube-system" namespace to be "Ready" ...
	I1004 04:46:31.776403   74599 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-calico-204413" in "kube-system" namespace to be "Ready" ...
	I1004 04:46:31.782153   74599 pod_ready.go:93] pod "kube-apiserver-calico-204413" in "kube-system" namespace has status "Ready":"True"
	I1004 04:46:31.782178   74599 pod_ready.go:82] duration metric: took 5.765269ms for pod "kube-apiserver-calico-204413" in "kube-system" namespace to be "Ready" ...
	I1004 04:46:31.782189   74599 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-calico-204413" in "kube-system" namespace to be "Ready" ...
	I1004 04:46:32.139332   74599 pod_ready.go:93] pod "kube-controller-manager-calico-204413" in "kube-system" namespace has status "Ready":"True"
	I1004 04:46:32.139359   74599 pod_ready.go:82] duration metric: took 357.162149ms for pod "kube-controller-manager-calico-204413" in "kube-system" namespace to be "Ready" ...
	I1004 04:46:32.139369   74599 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-p5q7v" in "kube-system" namespace to be "Ready" ...
	I1004 04:46:32.539008   74599 pod_ready.go:93] pod "kube-proxy-p5q7v" in "kube-system" namespace has status "Ready":"True"
	I1004 04:46:32.539038   74599 pod_ready.go:82] duration metric: took 399.661788ms for pod "kube-proxy-p5q7v" in "kube-system" namespace to be "Ready" ...
	I1004 04:46:32.539051   74599 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-calico-204413" in "kube-system" namespace to be "Ready" ...
	I1004 04:46:32.938475   74599 pod_ready.go:93] pod "kube-scheduler-calico-204413" in "kube-system" namespace has status "Ready":"True"
	I1004 04:46:32.938501   74599 pod_ready.go:82] duration metric: took 399.442074ms for pod "kube-scheduler-calico-204413" in "kube-system" namespace to be "Ready" ...
	I1004 04:46:32.938515   74599 pod_ready.go:39] duration metric: took 22.212646383s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:46:32.938531   74599 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:46:32.938585   74599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:46:32.955556   74599 api_server.go:72] duration metric: took 31.543480128s to wait for apiserver process to appear ...
	I1004 04:46:32.955582   74599 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:46:32.955599   74599 api_server.go:253] Checking apiserver healthz at https://192.168.61.159:8443/healthz ...
	I1004 04:46:32.961255   74599 api_server.go:279] https://192.168.61.159:8443/healthz returned 200:
	ok
	I1004 04:46:32.962155   74599 api_server.go:141] control plane version: v1.31.1
	I1004 04:46:32.962176   74599 api_server.go:131] duration metric: took 6.587624ms to wait for apiserver health ...
	I1004 04:46:32.962185   74599 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:46:33.145201   74599 system_pods.go:59] 9 kube-system pods found
	I1004 04:46:33.145230   74599 system_pods.go:61] "calico-kube-controllers-b8d8894fb-hqgwj" [64ac1f80-684e-474b-8cf4-d5fe3ea61d89] Running
	I1004 04:46:33.145239   74599 system_pods.go:61] "calico-node-j8qgw" [0a8b7310-e263-4783-8689-cf061af8a79b] Running
	I1004 04:46:33.145245   74599 system_pods.go:61] "coredns-7c65d6cfc9-v9vv8" [92b1b3e5-12b9-4d41-ae0e-188c94844335] Running
	I1004 04:46:33.145250   74599 system_pods.go:61] "etcd-calico-204413" [ec8f84c5-1407-4236-9baf-751049230097] Running
	I1004 04:46:33.145262   74599 system_pods.go:61] "kube-apiserver-calico-204413" [15cfcd6d-4473-4c60-a1ef-95f3ef8c67b9] Running
	I1004 04:46:33.145267   74599 system_pods.go:61] "kube-controller-manager-calico-204413" [b7daccdb-206a-41d3-b792-103910fbfa73] Running
	I1004 04:46:33.145272   74599 system_pods.go:61] "kube-proxy-p5q7v" [bd92325b-a50c-4d36-9cbc-4a06f12c601b] Running
	I1004 04:46:33.145276   74599 system_pods.go:61] "kube-scheduler-calico-204413" [470cb208-f5a2-48dc-8fe4-e88023b6451b] Running
	I1004 04:46:33.145281   74599 system_pods.go:61] "storage-provisioner" [7e2d2795-646e-4136-89f1-fcbd439de256] Running
	I1004 04:46:33.145288   74599 system_pods.go:74] duration metric: took 183.095985ms to wait for pod list to return data ...
	I1004 04:46:33.145300   74599 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:46:33.339249   74599 default_sa.go:45] found service account: "default"
	I1004 04:46:33.339278   74599 default_sa.go:55] duration metric: took 193.971708ms for default service account to be created ...
	I1004 04:46:33.339286   74599 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:46:33.543917   74599 system_pods.go:86] 9 kube-system pods found
	I1004 04:46:33.543949   74599 system_pods.go:89] "calico-kube-controllers-b8d8894fb-hqgwj" [64ac1f80-684e-474b-8cf4-d5fe3ea61d89] Running
	I1004 04:46:33.543958   74599 system_pods.go:89] "calico-node-j8qgw" [0a8b7310-e263-4783-8689-cf061af8a79b] Running
	I1004 04:46:33.543965   74599 system_pods.go:89] "coredns-7c65d6cfc9-v9vv8" [92b1b3e5-12b9-4d41-ae0e-188c94844335] Running
	I1004 04:46:33.543970   74599 system_pods.go:89] "etcd-calico-204413" [ec8f84c5-1407-4236-9baf-751049230097] Running
	I1004 04:46:33.543976   74599 system_pods.go:89] "kube-apiserver-calico-204413" [15cfcd6d-4473-4c60-a1ef-95f3ef8c67b9] Running
	I1004 04:46:33.543982   74599 system_pods.go:89] "kube-controller-manager-calico-204413" [b7daccdb-206a-41d3-b792-103910fbfa73] Running
	I1004 04:46:33.543989   74599 system_pods.go:89] "kube-proxy-p5q7v" [bd92325b-a50c-4d36-9cbc-4a06f12c601b] Running
	I1004 04:46:33.543994   74599 system_pods.go:89] "kube-scheduler-calico-204413" [470cb208-f5a2-48dc-8fe4-e88023b6451b] Running
	I1004 04:46:33.544001   74599 system_pods.go:89] "storage-provisioner" [7e2d2795-646e-4136-89f1-fcbd439de256] Running
	I1004 04:46:33.544008   74599 system_pods.go:126] duration metric: took 204.716543ms to wait for k8s-apps to be running ...
	I1004 04:46:33.544020   74599 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:46:33.544068   74599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:46:33.560952   74599 system_svc.go:56] duration metric: took 16.916146ms WaitForService to wait for kubelet
	I1004 04:46:33.561037   74599 kubeadm.go:582] duration metric: took 32.148961796s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:46:33.561072   74599 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:46:33.740493   74599 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:46:33.740524   74599 node_conditions.go:123] node cpu capacity is 2
	I1004 04:46:33.740539   74599 node_conditions.go:105] duration metric: took 179.452453ms to run NodePressure ...
	I1004 04:46:33.740553   74599 start.go:241] waiting for startup goroutines ...
	I1004 04:46:33.740563   74599 start.go:246] waiting for cluster config update ...
	I1004 04:46:33.740578   74599 start.go:255] writing updated cluster config ...
	I1004 04:46:33.798443   74599 ssh_runner.go:195] Run: rm -f paused
	I1004 04:46:33.863204   74599 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:46:34.003648   74599 out.go:177] * Done! kubectl is now configured to use "calico-204413" cluster and "default" namespace by default
	I1004 04:46:32.249919   76445 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.44880472s)
	I1004 04:46:32.249957   76445 crio.go:469] duration metric: took 2.448926724s to extract the tarball
	I1004 04:46:32.249968   76445 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:46:32.288898   76445 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:46:32.341321   76445 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:46:32.341343   76445 cache_images.go:84] Images are preloaded, skipping loading
	I1004 04:46:32.341353   76445 kubeadm.go:934] updating node { 192.168.50.4 8443 v1.31.1 crio true true} ...
	I1004 04:46:32.341468   76445 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-204413 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-204413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1004 04:46:32.341542   76445 ssh_runner.go:195] Run: crio config
	I1004 04:46:32.387949   76445 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1004 04:46:32.387992   76445 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:46:32.388022   76445 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.4 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-204413 NodeName:custom-flannel-204413 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:46:32.388153   76445 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-204413"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:46:32.388221   76445 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:46:32.398843   76445 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:46:32.398900   76445 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:46:32.408775   76445 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1004 04:46:32.427465   76445 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:46:32.445781   76445 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
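Editor's note: the rendered config shown above is one multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube copies to /var/tmp/minikube/kubeadm.yaml.new in the scp step above. A minimal, standard-library Go sketch (illustrative only, not minikube's own code) that splits such a file and lists the kind of each document:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Path from the log (the .new file is later copied to kubeadm.yaml).
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		// The rendered config is multi-document YAML; report the kind of each document.
		for i, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
					fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
				}
			}
		}
	}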
	I1004 04:46:32.464330   76445 ssh_runner.go:195] Run: grep 192.168.50.4	control-plane.minikube.internal$ /etc/hosts
	I1004 04:46:32.468574   76445 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
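Editor's note: the host-record injection above filters any stale control-plane.minikube.internal entry out of /etc/hosts and appends a fresh one pointing at the node IP. A rough Go equivalent of that one-liner (a sketch only; the record and path are taken from the log, and writing /etc/hosts needs root just like the sudo cp above):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Record and path taken from the log; adjust the IP for a different node.
		const record = "192.168.50.4\tcontrol-plane.minikube.internal"

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}

		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale entry for the control-plane alias, like the grep -v above.
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, record)

		// Writing /etc/hosts requires root, hence the sudo cp in the log.
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
		fmt.Println("host record injected")
	}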
	I1004 04:46:32.483644   76445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:46:32.617809   76445 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:46:32.640415   76445 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413 for IP: 192.168.50.4
	I1004 04:46:32.640435   76445 certs.go:194] generating shared ca certs ...
	I1004 04:46:32.640454   76445 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:46:32.640631   76445 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:46:32.640682   76445 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:46:32.640693   76445 certs.go:256] generating profile certs ...
	I1004 04:46:32.640764   76445 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/client.key
	I1004 04:46:32.640793   76445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/client.crt with IP's: []
	I1004 04:46:32.971698   76445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/client.crt ...
	I1004 04:46:32.971724   76445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/client.crt: {Name:mk28a40928509857f9f66f79a9e7022325d0e8ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:46:32.971904   76445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/client.key ...
	I1004 04:46:32.971919   76445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/client.key: {Name:mk767938c21fa3315e3de347f40b044bfdb5923a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:46:32.971996   76445 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/apiserver.key.10d69239
	I1004 04:46:32.972012   76445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/apiserver.crt.10d69239 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.4]
	I1004 04:46:33.283436   76445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/apiserver.crt.10d69239 ...
	I1004 04:46:33.283465   76445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/apiserver.crt.10d69239: {Name:mk921a7213f3606735afbc068a0a48a1689026c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:46:33.283629   76445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/apiserver.key.10d69239 ...
	I1004 04:46:33.283661   76445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/apiserver.key.10d69239: {Name:mk408bd6a9b6d25d6ac01a093c7eaf1e8a8226b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:46:33.283764   76445 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/apiserver.crt.10d69239 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/apiserver.crt
	I1004 04:46:33.283883   76445 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/apiserver.key.10d69239 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/apiserver.key
	I1004 04:46:33.283946   76445 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/proxy-client.key
	I1004 04:46:33.283962   76445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/proxy-client.crt with IP's: []
	I1004 04:46:33.397698   76445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/proxy-client.crt ...
	I1004 04:46:33.397748   76445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/proxy-client.crt: {Name:mk4f5aaee6e79bbbe510833158e3edbec91030bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:46:33.397922   76445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/proxy-client.key ...
	I1004 04:46:33.397937   76445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/proxy-client.key: {Name:mk5e79aa2397eaa6574ce519c3178c09d30e49fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
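Editor's note: the certs.go/crypto.go steps above produce a client cert, an apiserver serving cert with the SAN IPs listed in the log, and a proxy-client (aggregator) cert, each signed by the shared minikube CA. As a hedged illustration of what one such step involves (not minikube's actual implementation; it uses a throwaway self-signed CA instead of the real one under .minikube, and error handling is minimal):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Hypothetical stand-in for the shared minikube CA: a fresh self-signed CA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Serving cert signed by the CA, with the same SAN IPs as the log above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.4")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		out, _ := os.Create("apiserver.crt")
		defer out.Close()
		pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}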
	I1004 04:46:33.398132   76445 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:46:33.398169   76445 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:46:33.398178   76445 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:46:33.398199   76445 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:46:33.398224   76445 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:46:33.398244   76445 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:46:33.398285   76445 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:46:33.398862   76445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:46:33.438051   76445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:46:33.476720   76445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:46:33.507894   76445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:46:33.540505   76445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1004 04:46:33.574770   76445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 04:46:33.600220   76445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:46:33.627735   76445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/custom-flannel-204413/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:46:33.655157   76445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:46:33.682931   76445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:46:33.711069   76445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:46:33.735204   76445 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:46:33.752560   76445 ssh_runner.go:195] Run: openssl version
	I1004 04:46:33.758573   76445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:46:33.769593   76445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:46:33.774230   76445 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:46:33.774279   76445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:46:33.780538   76445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:46:33.793376   76445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:46:33.805956   76445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:46:33.811154   76445 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:46:33.811206   76445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:46:33.819753   76445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:46:33.836715   76445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:46:33.853332   76445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:46:33.858538   76445 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:46:33.858589   76445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:46:33.866643   76445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
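Editor's note: the openssl/ln sequence above installs each certificate under /etc/ssl/certs using a c_rehash-style name: the link name is the OpenSSL subject hash of the certificate (e.g. 3ec20f2e, b5213941, 51391683) plus a ".0" suffix. A small Go sketch of the same two steps, reusing a path from the log (illustrative; creating links under /etc/ssl/certs needs root, as the sudo calls above show):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/16879.pem" // path from the log
		// Compute the subject hash the same way the log does.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
			panic(err)
		}
		fmt.Println("linked", cert, "->", link)
	}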
	I1004 04:46:33.879331   76445 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:46:33.884220   76445 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 04:46:33.884277   76445 kubeadm.go:392] StartCluster: {Name:custom-flannel-204413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-204413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.50.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:46:33.884360   76445 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:46:33.884406   76445 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:46:33.925268   76445 cri.go:89] found id: ""
	I1004 04:46:33.925340   76445 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:46:33.936621   76445 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:46:33.946927   76445 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:46:33.956768   76445 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:46:33.956786   76445 kubeadm.go:157] found existing configuration files:
	
	I1004 04:46:33.956832   76445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:46:33.967313   76445 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:46:33.967375   76445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:46:33.976967   76445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:46:33.986028   76445 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:46:33.986082   76445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:46:33.997064   76445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:46:34.007373   76445 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:46:34.007423   76445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:46:34.017396   76445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:46:34.027128   76445 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:46:34.027181   76445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:46:34.036698   76445 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:46:34.096552   76445 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 04:46:34.096617   76445 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:46:34.230402   76445 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:46:34.230556   76445 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:46:34.230669   76445 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 04:46:34.242025   76445 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:46:30.742505   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:30.743183   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | unable to find current IP address of domain enable-default-cni-204413 in network mk-enable-default-cni-204413
	I1004 04:46:30.743213   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:30.743058   78000 retry.go:31] will retry after 583.332692ms: waiting for machine to come up
	I1004 04:46:31.327874   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:31.328502   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | unable to find current IP address of domain enable-default-cni-204413 in network mk-enable-default-cni-204413
	I1004 04:46:31.328529   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:31.328457   78000 retry.go:31] will retry after 967.43678ms: waiting for machine to come up
	I1004 04:46:32.297565   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:32.298146   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | unable to find current IP address of domain enable-default-cni-204413 in network mk-enable-default-cni-204413
	I1004 04:46:32.298177   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:32.298096   78000 retry.go:31] will retry after 1.289909988s: waiting for machine to come up
	I1004 04:46:33.589929   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:33.590402   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | unable to find current IP address of domain enable-default-cni-204413 in network mk-enable-default-cni-204413
	I1004 04:46:33.590434   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:33.590341   78000 retry.go:31] will retry after 1.37054624s: waiting for machine to come up
	I1004 04:46:34.962865   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:34.963251   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | unable to find current IP address of domain enable-default-cni-204413 in network mk-enable-default-cni-204413
	I1004 04:46:34.963286   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:34.963212   78000 retry.go:31] will retry after 2.121409384s: waiting for machine to come up
	I1004 04:46:34.372075   76445 out.go:235]   - Generating certificates and keys ...
	I1004 04:46:34.372238   76445 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:46:34.372336   76445 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:46:34.550228   76445 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 04:46:34.762420   76445 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1004 04:46:35.111032   76445 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1004 04:46:35.291847   76445 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1004 04:46:35.506958   76445 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1004 04:46:35.507156   76445 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-204413 localhost] and IPs [192.168.50.4 127.0.0.1 ::1]
	I1004 04:46:35.657318   76445 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1004 04:46:35.657504   76445 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-204413 localhost] and IPs [192.168.50.4 127.0.0.1 ::1]
	I1004 04:46:35.945303   76445 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 04:46:36.309135   76445 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 04:46:36.491643   76445 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1004 04:46:36.491754   76445 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:46:36.682724   76445 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:46:36.755623   76445 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 04:46:36.896280   76445 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:46:36.969551   76445 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:46:37.329723   76445 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:46:37.330550   76445 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:46:37.333181   76445 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:46:37.086695   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:37.087341   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | unable to find current IP address of domain enable-default-cni-204413 in network mk-enable-default-cni-204413
	I1004 04:46:37.087373   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:37.087296   78000 retry.go:31] will retry after 2.663477826s: waiting for machine to come up
	I1004 04:46:39.753706   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:39.754350   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | unable to find current IP address of domain enable-default-cni-204413 in network mk-enable-default-cni-204413
	I1004 04:46:39.754378   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:39.754277   78000 retry.go:31] will retry after 3.565649962s: waiting for machine to come up
	I1004 04:46:37.335094   76445 out.go:235]   - Booting up control plane ...
	I1004 04:46:37.335221   76445 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:46:37.335335   76445 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:46:37.335434   76445 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:46:37.353714   76445 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:46:37.361633   76445 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:46:37.361701   76445 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:46:37.526157   76445 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 04:46:37.526337   76445 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 04:46:38.028045   76445 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.354919ms
	I1004 04:46:38.028163   76445 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 04:46:43.527417   76445 kubeadm.go:310] [api-check] The API server is healthy after 5.502610927s
	I1004 04:46:43.546308   76445 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 04:46:43.569145   76445 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 04:46:43.616714   76445 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 04:46:43.616978   76445 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-204413 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 04:46:43.638097   76445 kubeadm.go:310] [bootstrap-token] Using token: i0i8qq.vuiakvbxagmhcvme
	I1004 04:46:43.639670   76445 out.go:235]   - Configuring RBAC rules ...
	I1004 04:46:43.639840   76445 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 04:46:43.646513   76445 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 04:46:43.662051   76445 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 04:46:43.667272   76445 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 04:46:43.670740   76445 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 04:46:43.674669   76445 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 04:46:43.937040   76445 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 04:46:44.376648   76445 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 04:46:44.935208   76445 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 04:46:44.938279   76445 kubeadm.go:310] 
	I1004 04:46:44.938381   76445 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 04:46:44.938405   76445 kubeadm.go:310] 
	I1004 04:46:44.938520   76445 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 04:46:44.938534   76445 kubeadm.go:310] 
	I1004 04:46:44.938564   76445 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 04:46:44.938633   76445 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 04:46:44.938708   76445 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 04:46:44.938719   76445 kubeadm.go:310] 
	I1004 04:46:44.938782   76445 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 04:46:44.938790   76445 kubeadm.go:310] 
	I1004 04:46:44.938845   76445 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 04:46:44.938854   76445 kubeadm.go:310] 
	I1004 04:46:44.938949   76445 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 04:46:44.939056   76445 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 04:46:44.939135   76445 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 04:46:44.939141   76445 kubeadm.go:310] 
	I1004 04:46:44.939218   76445 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 04:46:44.939288   76445 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 04:46:44.939297   76445 kubeadm.go:310] 
	I1004 04:46:44.939365   76445 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i0i8qq.vuiakvbxagmhcvme \
	I1004 04:46:44.939458   76445 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 \
	I1004 04:46:44.939477   76445 kubeadm.go:310] 	--control-plane 
	I1004 04:46:44.939483   76445 kubeadm.go:310] 
	I1004 04:46:44.939551   76445 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 04:46:44.939560   76445 kubeadm.go:310] 
	I1004 04:46:44.939637   76445 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i0i8qq.vuiakvbxagmhcvme \
	I1004 04:46:44.939740   76445 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 
	I1004 04:46:44.940100   76445 kubeadm.go:310] W1004 04:46:34.075843     835 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 04:46:44.940537   76445 kubeadm.go:310] W1004 04:46:34.076667     835 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 04:46:44.940653   76445 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:46:44.940668   76445 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1004 04:46:44.942242   76445 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I1004 04:46:43.320985   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:43.321464   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | unable to find current IP address of domain enable-default-cni-204413 in network mk-enable-default-cni-204413
	I1004 04:46:43.321482   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:43.321386   78000 retry.go:31] will retry after 2.985201138s: waiting for machine to come up
	I1004 04:46:44.943487   76445 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1004 04:46:44.943541   76445 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1004 04:46:44.948519   76445 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1004 04:46:44.948539   76445 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I1004 04:46:44.973432   76445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1004 04:46:45.423407   76445 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:46:45.423468   76445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:46:45.423538   76445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-204413 minikube.k8s.io/updated_at=2024_10_04T04_46_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=custom-flannel-204413 minikube.k8s.io/primary=true
	I1004 04:46:45.575241   76445 ops.go:34] apiserver oom_adj: -16
	I1004 04:46:45.593801   76445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:46:46.094777   76445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:46:46.594702   76445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:46:47.094817   76445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:46:47.594406   76445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:46:48.094267   76445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:46:48.594739   76445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:46:49.093871   76445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:46:49.594919   76445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:46:50.093969   76445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:46:50.186503   76445 kubeadm.go:1113] duration metric: took 4.763095044s to wait for elevateKubeSystemPrivileges
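Editor's note: the repeated "kubectl get sa default" runs above are a simple poll: minikube retries the command until the default service account exists, then records the elapsed time as the elevateKubeSystemPrivileges duration. A hedged Go sketch of that pattern, reusing the kubectl path and kubeconfig shown in the log (illustrative only; the retry interval and timeout are assumptions):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for {
			// Same command the log retries, run via sudo like ssh_runner does.
			cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.1/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account is ready")
				return
			}
			if time.Now().After(deadline) {
				fmt.Println("timed out waiting for default service account")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}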
	I1004 04:46:50.186540   76445 kubeadm.go:394] duration metric: took 16.302266117s to StartCluster
	I1004 04:46:50.186562   76445 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:46:50.186635   76445 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:46:50.188118   76445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:46:50.188349   76445 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:46:50.188380   76445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 04:46:50.188492   76445 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:46:50.188559   76445 config.go:182] Loaded profile config "custom-flannel-204413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:46:50.188600   76445 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-204413"
	I1004 04:46:50.188618   76445 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-204413"
	I1004 04:46:50.188588   76445 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-204413"
	I1004 04:46:50.188678   76445 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-204413"
	I1004 04:46:50.188713   76445 host.go:66] Checking if "custom-flannel-204413" exists ...
	I1004 04:46:50.189006   76445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:46:50.189034   76445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:46:50.189052   76445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:46:50.189065   76445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:46:50.190071   76445 out.go:177] * Verifying Kubernetes components...
	I1004 04:46:50.191387   76445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:46:50.208920   76445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44811
	I1004 04:46:50.209121   76445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I1004 04:46:50.209498   76445 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:46:50.209584   76445 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:46:50.210134   76445 main.go:141] libmachine: Using API Version  1
	I1004 04:46:50.210163   76445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:46:50.210254   76445 main.go:141] libmachine: Using API Version  1
	I1004 04:46:50.210277   76445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:46:50.210650   76445 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:46:50.210697   76445 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:46:50.210836   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetState
	I1004 04:46:50.211247   76445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:46:50.211296   76445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:46:50.215396   76445 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-204413"
	I1004 04:46:50.215436   76445 host.go:66] Checking if "custom-flannel-204413" exists ...
	I1004 04:46:50.215813   76445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:46:50.215854   76445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:46:50.228251   76445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41789
	I1004 04:46:50.228776   76445 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:46:50.229274   76445 main.go:141] libmachine: Using API Version  1
	I1004 04:46:50.229300   76445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:46:50.229605   76445 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:46:50.229756   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetState
	I1004 04:46:50.232171   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .DriverName
	I1004 04:46:50.232604   76445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I1004 04:46:50.232934   76445 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:46:50.233352   76445 main.go:141] libmachine: Using API Version  1
	I1004 04:46:50.233376   76445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:46:50.233713   76445 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:46:50.233865   76445 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:46:46.310511   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:46.310956   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | unable to find current IP address of domain enable-default-cni-204413 in network mk-enable-default-cni-204413
	I1004 04:46:46.310981   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | I1004 04:46:46.310915   78000 retry.go:31] will retry after 3.857224783s: waiting for machine to come up
	I1004 04:46:50.169958   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:50.170399   77883 main.go:141] libmachine: (enable-default-cni-204413) Found IP for machine: 192.168.72.177
	I1004 04:46:50.170422   77883 main.go:141] libmachine: (enable-default-cni-204413) Reserving static IP address...
	I1004 04:46:50.170438   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has current primary IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:50.170814   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-204413", mac: "52:54:00:ba:cf:c2", ip: "192.168.72.177"} in network mk-enable-default-cni-204413
	I1004 04:46:50.258423   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | Getting to WaitForSSH function...
	I1004 04:46:50.258492   77883 main.go:141] libmachine: (enable-default-cni-204413) Reserved static IP address: 192.168.72.177
	I1004 04:46:50.258505   77883 main.go:141] libmachine: (enable-default-cni-204413) Waiting for SSH to be available...
	I1004 04:46:50.261076   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:50.261227   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:50.261257   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:50.261381   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | Using SSH client type: external
	I1004 04:46:50.261400   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/enable-default-cni-204413/id_rsa (-rw-------)
	I1004 04:46:50.261444   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/enable-default-cni-204413/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:46:50.261457   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | About to run SSH command:
	I1004 04:46:50.261468   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | exit 0
	I1004 04:46:50.388208   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | SSH cmd err, output: <nil>: 
	I1004 04:46:50.388499   77883 main.go:141] libmachine: (enable-default-cni-204413) KVM machine creation complete!
	I1004 04:46:50.388833   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetConfigRaw
	I1004 04:46:50.389534   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .DriverName
	I1004 04:46:50.389772   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .DriverName
	I1004 04:46:50.389951   77883 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 04:46:50.389970   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetState
	I1004 04:46:50.391590   77883 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 04:46:50.391605   77883 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 04:46:50.391613   77883 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 04:46:50.391620   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHHostname
	I1004 04:46:50.394340   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:50.394746   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:enable-default-cni-204413 Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:50.394791   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:50.394890   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHPort
	I1004 04:46:50.395065   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHKeyPath
	I1004 04:46:50.395237   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHKeyPath
	I1004 04:46:50.395380   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHUsername
	I1004 04:46:50.395553   77883 main.go:141] libmachine: Using SSH client type: native
	I1004 04:46:50.395806   77883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.177 22 <nil> <nil>}
	I1004 04:46:50.395819   77883 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 04:46:50.234111   76445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:46:50.234145   76445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:46:50.234996   76445 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:46:50.235015   76445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:46:50.235034   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHHostname
	I1004 04:46:50.240699   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:50.250696   76445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41245
	I1004 04:46:50.251187   76445 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:46:50.251600   76445 main.go:141] libmachine: Using API Version  1
	I1004 04:46:50.251616   76445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:46:50.252028   76445 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:46:50.252170   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetState
	I1004 04:46:50.255020   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .DriverName
	I1004 04:46:50.255213   76445 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:46:50.255226   76445 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:46:50.255238   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHHostname
	I1004 04:46:50.257960   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:50.259332   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:50.259904   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHPort
	I1004 04:46:50.259926   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHPort
	I1004 04:46:50.259969   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:50.259974   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:49:56", ip: ""} in network mk-custom-flannel-204413: {Iface:virbr2 ExpiryTime:2024-10-04 05:46:17 +0000 UTC Type:0 Mac:52:54:00:57:49:56 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:custom-flannel-204413 Clientid:01:52:54:00:57:49:56}
	I1004 04:46:50.259995   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | domain custom-flannel-204413 has defined IP address 192.168.50.4 and MAC address 52:54:00:57:49:56 in network mk-custom-flannel-204413
	I1004 04:46:50.260130   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:50.260179   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHKeyPath
	I1004 04:46:50.260408   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHUsername
	I1004 04:46:50.260409   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .GetSSHUsername
	I1004 04:46:50.260589   76445 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/custom-flannel-204413/id_rsa Username:docker}
	I1004 04:46:50.261180   76445 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/custom-flannel-204413/id_rsa Username:docker}
	I1004 04:46:50.559453   76445 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:46:50.559459   76445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 04:46:50.594281   76445 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-204413" to be "Ready" ...
	I1004 04:46:50.703341   76445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:46:50.704046   76445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:46:50.974224   76445 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
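For reference, the host record noted above is produced by rewriting the coredns ConfigMap in place. A minimal sketch of just that hosts{} injection, assuming kubectl on the PATH is pointed at this cluster and the host gateway is 192.168.50.1 as in this run (the logged command also inserts a log directive after errors):

    # Add a hosts{} block ahead of the "forward . /etc/resolv.conf" line in the coredns
    # Corefile, then replace the ConfigMap so host.minikube.internal resolves in-cluster.
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl -n kube-system replace -f -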
	I1004 04:46:51.312321   76445 main.go:141] libmachine: Making call to close driver server
	I1004 04:46:51.312345   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .Close
	I1004 04:46:51.312521   76445 main.go:141] libmachine: Making call to close driver server
	I1004 04:46:51.312539   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .Close
	I1004 04:46:51.312753   76445 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:46:51.312804   76445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:46:51.312818   76445 main.go:141] libmachine: Making call to close driver server
	I1004 04:46:51.312827   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .Close
	I1004 04:46:51.312762   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | Closing plugin on server side
	I1004 04:46:51.312767   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | Closing plugin on server side
	I1004 04:46:51.312772   76445 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:46:51.312911   76445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:46:51.312920   76445 main.go:141] libmachine: Making call to close driver server
	I1004 04:46:51.312927   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .Close
	I1004 04:46:51.313362   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | Closing plugin on server side
	I1004 04:46:51.313365   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | Closing plugin on server side
	I1004 04:46:51.313366   76445 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:46:51.313380   76445 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:46:51.313389   76445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:46:51.313395   76445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:46:51.328811   76445 main.go:141] libmachine: Making call to close driver server
	I1004 04:46:51.328830   76445 main.go:141] libmachine: (custom-flannel-204413) Calling .Close
	I1004 04:46:51.329092   76445 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:46:51.329141   76445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:46:51.329127   76445 main.go:141] libmachine: (custom-flannel-204413) DBG | Closing plugin on server side
	I1004 04:46:51.330777   76445 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
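Only storage-provisioner and default-storageclass were requested for this profile. As a quick cross-check (hypothetical invocation, using the profile name from this log), the addon status can be listed from the host:

    # Show which addons are enabled for the custom-flannel profile.
    minikube addons list -p custom-flannel-204413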
	I1004 04:46:50.503971   77883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:46:50.503996   77883 main.go:141] libmachine: Detecting the provisioner...
	I1004 04:46:50.504006   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHHostname
	I1004 04:46:50.507091   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:50.507594   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:enable-default-cni-204413 Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:50.507627   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:50.507853   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHPort
	I1004 04:46:50.508058   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHKeyPath
	I1004 04:46:50.508245   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHKeyPath
	I1004 04:46:50.508397   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHUsername
	I1004 04:46:50.508601   77883 main.go:141] libmachine: Using SSH client type: native
	I1004 04:46:50.508805   77883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.177 22 <nil> <nil>}
	I1004 04:46:50.508820   77883 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 04:46:50.620872   77883 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1004 04:46:50.620934   77883 main.go:141] libmachine: found compatible host: buildroot
	I1004 04:46:50.620943   77883 main.go:141] libmachine: Provisioning with buildroot...
	I1004 04:46:50.620952   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetMachineName
	I1004 04:46:50.621173   77883 buildroot.go:166] provisioning hostname "enable-default-cni-204413"
	I1004 04:46:50.621218   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetMachineName
	I1004 04:46:50.621419   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHHostname
	I1004 04:46:50.624684   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:50.625074   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:enable-default-cni-204413 Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:50.625103   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:50.625320   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHPort
	I1004 04:46:50.625539   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHKeyPath
	I1004 04:46:50.625711   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHKeyPath
	I1004 04:46:50.625887   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHUsername
	I1004 04:46:50.626165   77883 main.go:141] libmachine: Using SSH client type: native
	I1004 04:46:50.626382   77883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.177 22 <nil> <nil>}
	I1004 04:46:50.626400   77883 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-204413 && echo "enable-default-cni-204413" | sudo tee /etc/hostname
	I1004 04:46:50.751491   77883 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-204413
	
	I1004 04:46:50.751524   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHHostname
	I1004 04:46:50.754655   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:50.755024   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:enable-default-cni-204413 Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:50.755075   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:50.755243   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHPort
	I1004 04:46:50.755442   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHKeyPath
	I1004 04:46:50.755630   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHKeyPath
	I1004 04:46:50.755791   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHUsername
	I1004 04:46:50.755969   77883 main.go:141] libmachine: Using SSH client type: native
	I1004 04:46:50.756153   77883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.177 22 <nil> <nil>}
	I1004 04:46:50.756169   77883 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-204413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-204413/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-204413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:46:50.877647   77883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
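The two commands above set the guest hostname and pin it to 127.0.1.1 in /etc/hosts so the node name always resolves locally. A short verification sketch for the guest (names taken from this run):

    # Both the kernel hostname and the /etc/hosts alias should show the profile name.
    hostname
    grep 'enable-default-cni-204413' /etc/hosts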
	I1004 04:46:50.877727   77883 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:46:50.877758   77883 buildroot.go:174] setting up certificates
	I1004 04:46:50.877770   77883 provision.go:84] configureAuth start
	I1004 04:46:50.877782   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetMachineName
	I1004 04:46:50.878060   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetIP
	I1004 04:46:50.881362   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:50.881824   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:enable-default-cni-204413 Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:50.881853   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:50.882022   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHHostname
	I1004 04:46:50.884532   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:50.885021   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:enable-default-cni-204413 Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:50.885053   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:50.885287   77883 provision.go:143] copyHostCerts
	I1004 04:46:50.885351   77883 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:46:50.885364   77883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:46:50.885434   77883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:46:50.885553   77883 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:46:50.885570   77883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:46:50.885620   77883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:46:50.885681   77883 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:46:50.885688   77883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:46:50.885706   77883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:46:50.885750   77883 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-204413 san=[127.0.0.1 192.168.72.177 enable-default-cni-204413 localhost minikube]
	I1004 04:46:51.109875   77883 provision.go:177] copyRemoteCerts
	I1004 04:46:51.109957   77883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:46:51.109985   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHHostname
	I1004 04:46:51.113176   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.113509   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:enable-default-cni-204413 Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:51.113539   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.113729   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHPort
	I1004 04:46:51.113917   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHKeyPath
	I1004 04:46:51.114071   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHUsername
	I1004 04:46:51.114188   77883 sshutil.go:53] new ssh client: &{IP:192.168.72.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/enable-default-cni-204413/id_rsa Username:docker}
	I1004 04:46:51.202533   77883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:46:51.229880   77883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1004 04:46:51.258655   77883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 04:46:51.288225   77883 provision.go:87] duration metric: took 410.443482ms to configureAuth
	I1004 04:46:51.288256   77883 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:46:51.288447   77883 config.go:182] Loaded profile config "enable-default-cni-204413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:46:51.288538   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHHostname
	I1004 04:46:51.291757   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.292150   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:enable-default-cni-204413 Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:51.292194   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.292377   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHPort
	I1004 04:46:51.292613   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHKeyPath
	I1004 04:46:51.292796   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHKeyPath
	I1004 04:46:51.292961   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHUsername
	I1004 04:46:51.293132   77883 main.go:141] libmachine: Using SSH client type: native
	I1004 04:46:51.293666   77883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.177 22 <nil> <nil>}
	I1004 04:46:51.293704   77883 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:46:51.537643   77883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:46:51.537678   77883 main.go:141] libmachine: Checking connection to Docker...
	I1004 04:46:51.537690   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetURL
	I1004 04:46:51.539047   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | Using libvirt version 6000000
	I1004 04:46:51.541652   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.542025   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:enable-default-cni-204413 Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:51.542056   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.542228   77883 main.go:141] libmachine: Docker is up and running!
	I1004 04:46:51.542246   77883 main.go:141] libmachine: Reticulating splines...
	I1004 04:46:51.542254   77883 client.go:171] duration metric: took 24.705461892s to LocalClient.Create
	I1004 04:46:51.542276   77883 start.go:167] duration metric: took 24.705524767s to libmachine.API.Create "enable-default-cni-204413"
	I1004 04:46:51.542290   77883 start.go:293] postStartSetup for "enable-default-cni-204413" (driver="kvm2")
	I1004 04:46:51.542304   77883 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:46:51.542328   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .DriverName
	I1004 04:46:51.542556   77883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:46:51.542579   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHHostname
	I1004 04:46:51.545018   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.545382   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:enable-default-cni-204413 Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:51.545411   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.545523   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHPort
	I1004 04:46:51.545696   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHKeyPath
	I1004 04:46:51.545824   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHUsername
	I1004 04:46:51.545947   77883 sshutil.go:53] new ssh client: &{IP:192.168.72.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/enable-default-cni-204413/id_rsa Username:docker}
	I1004 04:46:51.626378   77883 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:46:51.631417   77883 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:46:51.631446   77883 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:46:51.631521   77883 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:46:51.631632   77883 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:46:51.631762   77883 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:46:51.644435   77883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:46:51.677254   77883 start.go:296] duration metric: took 134.949016ms for postStartSetup
	I1004 04:46:51.677329   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetConfigRaw
	I1004 04:46:51.678025   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetIP
	I1004 04:46:51.681134   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.681515   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:enable-default-cni-204413 Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:51.681551   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.681847   77883 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/config.json ...
	I1004 04:46:51.682055   77883 start.go:128] duration metric: took 24.868970556s to createHost
	I1004 04:46:51.682085   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHHostname
	I1004 04:46:51.684728   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.685074   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:enable-default-cni-204413 Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:51.685138   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.685241   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHPort
	I1004 04:46:51.685447   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHKeyPath
	I1004 04:46:51.685638   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHKeyPath
	I1004 04:46:51.685787   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHUsername
	I1004 04:46:51.685965   77883 main.go:141] libmachine: Using SSH client type: native
	I1004 04:46:51.686176   77883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.177 22 <nil> <nil>}
	I1004 04:46:51.686193   77883 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:46:51.797899   77883 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728017211.770203966
	
	I1004 04:46:51.797928   77883 fix.go:216] guest clock: 1728017211.770203966
	I1004 04:46:51.797939   77883 fix.go:229] Guest: 2024-10-04 04:46:51.770203966 +0000 UTC Remote: 2024-10-04 04:46:51.68206979 +0000 UTC m=+36.233074345 (delta=88.134176ms)
	I1004 04:46:51.797986   77883 fix.go:200] guest clock delta is within tolerance: 88.134176ms
	I1004 04:46:51.797998   77883 start.go:83] releasing machines lock for "enable-default-cni-204413", held for 24.985141283s
	I1004 04:46:51.798030   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .DriverName
	I1004 04:46:51.798327   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetIP
	I1004 04:46:51.801393   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.801836   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:enable-default-cni-204413 Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:51.801876   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.802194   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .DriverName
	I1004 04:46:51.802750   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .DriverName
	I1004 04:46:51.802912   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .DriverName
	I1004 04:46:51.803012   77883 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:46:51.803065   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHHostname
	I1004 04:46:51.803128   77883 ssh_runner.go:195] Run: cat /version.json
	I1004 04:46:51.803154   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHHostname
	I1004 04:46:51.806080   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.806401   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.806444   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:enable-default-cni-204413 Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:51.806468   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.806595   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHPort
	I1004 04:46:51.806786   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHKeyPath
	I1004 04:46:51.806934   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:enable-default-cni-204413 Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:51.806948   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHUsername
	I1004 04:46:51.806990   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:51.807166   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHPort
	I1004 04:46:51.807199   77883 sshutil.go:53] new ssh client: &{IP:192.168.72.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/enable-default-cni-204413/id_rsa Username:docker}
	I1004 04:46:51.807303   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHKeyPath
	I1004 04:46:51.807467   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetSSHUsername
	I1004 04:46:51.807617   77883 sshutil.go:53] new ssh client: &{IP:192.168.72.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/enable-default-cni-204413/id_rsa Username:docker}
	I1004 04:46:51.886014   77883 ssh_runner.go:195] Run: systemctl --version
	I1004 04:46:51.912041   77883 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:46:52.082834   77883 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:46:52.089859   77883 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:46:52.089936   77883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:46:52.111207   77883 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:46:52.111230   77883 start.go:495] detecting cgroup driver to use...
	I1004 04:46:52.111291   77883 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:46:52.133675   77883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:46:52.148262   77883 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:46:52.148329   77883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:46:52.167357   77883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:46:52.185471   77883 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:46:52.351704   77883 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:46:52.528158   77883 docker.go:233] disabling docker service ...
	I1004 04:46:52.528224   77883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:46:52.542558   77883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:46:52.556830   77883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:46:52.687400   77883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:46:52.832705   77883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
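Since this profile runs the crio container runtime, the provisioner stops, disables and masks both cri-docker and docker before touching CRI-O. Condensed from the commands above into one manual sequence (sketch):

    # Stop the sockets first so socket activation cannot restart the services,
    # then mask the units so they stay down across reboots.
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is inactive"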
	I1004 04:46:52.846668   77883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:46:52.866992   77883 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:46:52.867068   77883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:46:52.878230   77883 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:46:52.878307   77883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:46:52.889008   77883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:46:52.900622   77883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:46:52.912654   77883 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:46:52.924406   77883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:46:52.936022   77883 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:46:52.953316   77883 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:46:52.964152   77883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:46:52.974081   77883 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:46:52.974138   77883 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:46:52.987024   77883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:46:52.997282   77883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:46:53.122400   77883 ssh_runner.go:195] Run: sudo systemctl restart crio
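The sequence of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup driver, conmon cgroup, unprivileged-port sysctl) and points crictl at the CRI-O socket via /etc/crictl.yaml before the restart. A quick check of the result on the guest; the expected values are reconstructed from those commands, not captured from this run:

    # Per the sed edits: pause_image = "registry.k8s.io/pause:3.10",
    # cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and a
    # default_sysctls entry "net.ipv4.ip_unprivileged_port_start=0".
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # crictl reads /etc/crictl.yaml, so no endpoint flag is needed once crio is back up.
    sudo crictl version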
	I1004 04:46:53.224008   77883 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:46:53.224083   77883 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:46:53.229424   77883 start.go:563] Will wait 60s for crictl version
	I1004 04:46:53.229485   77883 ssh_runner.go:195] Run: which crictl
	I1004 04:46:53.233241   77883 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:46:53.273842   77883 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:46:53.273929   77883 ssh_runner.go:195] Run: crio --version
	I1004 04:46:53.304069   77883 ssh_runner.go:195] Run: crio --version
	I1004 04:46:53.337140   77883 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:46:53.338302   77883 main.go:141] libmachine: (enable-default-cni-204413) Calling .GetIP
	I1004 04:46:53.340936   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:53.341262   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:cf:c2", ip: ""} in network mk-enable-default-cni-204413: {Iface:virbr3 ExpiryTime:2024-10-04 05:46:43 +0000 UTC Type:0 Mac:52:54:00:ba:cf:c2 Iaid: IPaddr:192.168.72.177 Prefix:24 Hostname:enable-default-cni-204413 Clientid:01:52:54:00:ba:cf:c2}
	I1004 04:46:53.341308   77883 main.go:141] libmachine: (enable-default-cni-204413) DBG | domain enable-default-cni-204413 has defined IP address 192.168.72.177 and MAC address 52:54:00:ba:cf:c2 in network mk-enable-default-cni-204413
	I1004 04:46:53.341537   77883 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1004 04:46:53.345944   77883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:46:53.358685   77883 kubeadm.go:883] updating cluster {Name:enable-default-cni-204413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-204413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.177 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:46:53.358789   77883 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:46:53.358863   77883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:46:53.392437   77883 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:46:53.392496   77883 ssh_runner.go:195] Run: which lz4
	I1004 04:46:53.396566   77883 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:46:53.400727   77883 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:46:53.400753   77883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 04:46:54.801126   77883 crio.go:462] duration metric: took 1.404590716s to copy over tarball
	I1004 04:46:54.801205   77883 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:46:51.331911   76445 addons.go:510] duration metric: took 1.143431459s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1004 04:46:51.479265   76445 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-204413" context rescaled to 1 replicas
	I1004 04:46:52.598547   76445 node_ready.go:53] node "custom-flannel-204413" has status "Ready":"False"
	I1004 04:46:55.099075   76445 node_ready.go:53] node "custom-flannel-204413" has status "Ready":"False"
	I1004 04:46:57.060757   77883 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.259518962s)
	I1004 04:46:57.060789   77883 crio.go:469] duration metric: took 2.259637066s to extract the tarball
	I1004 04:46:57.060799   77883 ssh_runner.go:146] rm: /preloaded.tar.lz4
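The 388 MB preload copied and unpacked above seeds the CRI-O image store under /var (so kubeadm does not have to pull control-plane images over the network). The guest-side portion of that step, reduced to a sketch with the paths from this log:

    # Unpack the preload into /var, keeping security.capability xattrs as in the logged command,
    # remove the tarball, and confirm the runtime now sees the v1.31.1 images.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json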
	I1004 04:46:57.098984   77883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:46:57.142855   77883 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:46:57.142885   77883 cache_images.go:84] Images are preloaded, skipping loading
	I1004 04:46:57.142895   77883 kubeadm.go:934] updating node { 192.168.72.177 8443 v1.31.1 crio true true} ...
	I1004 04:46:57.143009   77883 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-204413 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-204413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1004 04:46:57.143081   77883 ssh_runner.go:195] Run: crio config
	I1004 04:46:57.192967   77883 cni.go:84] Creating CNI manager for "bridge"
	I1004 04:46:57.192988   77883 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:46:57.193008   77883 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.177 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-204413 NodeName:enable-default-cni-204413 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:46:57.193130   77883 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-204413"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:46:57.193186   77883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:46:57.204356   77883 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:46:57.204437   77883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:46:57.214982   77883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1004 04:46:57.233606   77883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:46:57.252024   77883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
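The 2169-byte payload copied above is the rendered kubeadm config shown earlier. Purely as an illustration (minikube drives kubeadm itself; this is not a step from this run), such a config can be sanity-checked on the guest with a dry run that avoids touching /etc/kubernetes:

    # Validate the generated config against the bundled kubeadm binary without initializing anything.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run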
	I1004 04:46:57.270176   77883 ssh_runner.go:195] Run: grep 192.168.72.177	control-plane.minikube.internal$ /etc/hosts
	I1004 04:46:57.274281   77883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:46:57.287192   77883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:46:57.400886   77883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:46:57.418767   77883 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413 for IP: 192.168.72.177
	I1004 04:46:57.418805   77883 certs.go:194] generating shared ca certs ...
	I1004 04:46:57.418830   77883 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:46:57.419029   77883 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:46:57.419112   77883 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:46:57.419131   77883 certs.go:256] generating profile certs ...
	I1004 04:46:57.419238   77883 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/client.key
	I1004 04:46:57.419268   77883 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/client.crt with IP's: []
	I1004 04:46:57.579901   77883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/client.crt ...
	I1004 04:46:57.579933   77883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/client.crt: {Name:mk6a271af25cfa4b4477c43c7d19b30d2a88f298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:46:57.580106   77883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/client.key ...
	I1004 04:46:57.580121   77883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/client.key: {Name:mk69167c20b3b79d4c49d12375d159aa67140f6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:46:57.580208   77883 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/apiserver.key.aa2a4815
	I1004 04:46:57.580227   77883 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/apiserver.crt.aa2a4815 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.177]
	I1004 04:46:58.458722   77883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/apiserver.crt.aa2a4815 ...
	I1004 04:46:58.458759   77883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/apiserver.crt.aa2a4815: {Name:mkf207e962c5a04dd289f9d2452c5e6c20586157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:46:58.458914   77883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/apiserver.key.aa2a4815 ...
	I1004 04:46:58.458928   77883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/apiserver.key.aa2a4815: {Name:mk3d6ef0577688e8f0a1ed7e14f0a8f62a5d96b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:46:58.459034   77883 certs.go:381] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/apiserver.crt.aa2a4815 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/apiserver.crt
	I1004 04:46:58.459125   77883 certs.go:385] copying /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/apiserver.key.aa2a4815 -> /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/apiserver.key
	I1004 04:46:58.459188   77883 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/proxy-client.key
	I1004 04:46:58.459218   77883 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/proxy-client.crt with IP's: []
	I1004 04:46:58.523199   77883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/proxy-client.crt ...
	I1004 04:46:58.523227   77883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/proxy-client.crt: {Name:mk09cdd1f5bc8645ffd8cdf245bc1680e6cb297c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:46:58.523397   77883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/proxy-client.key ...
	I1004 04:46:58.523413   77883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/proxy-client.key: {Name:mka878934e65a82163edb9f16be5677ef9c5d1af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:46:58.523611   77883 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:46:58.523660   77883 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:46:58.523675   77883 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:46:58.523707   77883 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:46:58.523741   77883 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:46:58.523774   77883 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:46:58.523848   77883 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:46:58.524517   77883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:46:58.568034   77883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:46:58.597685   77883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:46:58.624117   77883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:46:58.655305   77883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1004 04:46:58.683886   77883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 04:46:58.800188   77883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:46:58.826466   77883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/enable-default-cni-204413/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 04:46:58.852639   77883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:46:58.876757   77883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:46:58.901873   77883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:46:58.926740   77883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:46:58.943935   77883 ssh_runner.go:195] Run: openssl version
	I1004 04:46:58.949864   77883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:46:58.960915   77883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:46:58.965418   77883 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:46:58.965476   77883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:46:58.971815   77883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:46:58.982556   77883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:46:58.993190   77883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:46:58.997905   77883 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:46:58.997970   77883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:46:59.004306   77883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:46:59.018603   77883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:46:59.046809   77883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:46:59.056305   77883 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:46:59.056369   77883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:46:59.063109   77883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
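
For reference, the hash-and-symlink sequence above is how extra CA certificates become discoverable by OpenSSL on the guest: each PEM under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and linked as `<hash>.0` in /etc/ssl/certs. A minimal Go sketch of the same idea, assuming it runs directly on the target host (the installCACert helper and the hard-coded path are illustrative, not minikube's actual code):

// Sketch only: hash a PEM with openssl and link it into /etc/ssl/certs
// under "<hash>.0", guarded the same way as the log ("test -L ... || ln -fs").
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	guarded := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pemPath, link)
	return exec.Command("sudo", "/bin/bash", "-c", guarded).Run()
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}
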
	I1004 04:46:59.082612   77883 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:46:59.087066   77883 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
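
The stat failure above is what certs.go interprets as "likely first start": the kubelet client certificate has never been generated on this VM. A tiny sketch of that check, assuming it runs on the guest itself (the likelyFirstStart name is made up):

// Sketch only: a non-zero exit from stat means the cert is absent,
// so the cluster is treated as not yet initialized.
package main

import (
	"fmt"
	"os/exec"
)

func likelyFirstStart() bool {
	err := exec.Command("stat", "/var/lib/minikube/certs/apiserver-kubelet-client.crt").Run()
	return err != nil
}

func main() {
	fmt.Println("likely first start:", likelyFirstStart())
}
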
	I1004 04:46:59.087116   77883 kubeadm.go:392] StartCluster: {Name:enable-default-cni-204413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.31.1 ClusterName:enable-default-cni-204413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.177 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:46:59.087188   77883 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:46:59.087225   77883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:46:59.130668   77883 cri.go:89] found id: ""
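
The cri.go lines above list any pre-existing kube-system containers before kubeadm is invoked; the empty `found id: ""` result means there is nothing to tear down. A hedged sketch of that listing step (the helper name is made up; the crictl invocation is taken verbatim from the log):

// Sketch only: shell out to crictl and collect the container IDs it prints,
// one per line; an empty slice corresponds to the empty "found id" above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	fmt.Println(ids, err)
}
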
	I1004 04:46:59.130725   77883 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:46:59.141393   77883 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:46:59.152733   77883 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:46:59.163635   77883 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:46:59.163653   77883 kubeadm.go:157] found existing configuration files:
	
	I1004 04:46:59.163699   77883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:46:59.173721   77883 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:46:59.173774   77883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:46:59.184163   77883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:46:59.194746   77883 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:46:59.194802   77883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:46:59.206665   77883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:46:59.219380   77883 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:46:59.219441   77883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:46:59.232160   77883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:46:59.242341   77883 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:46:59.242392   77883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
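
The grep/rm pairs above are the stale-config cleanup: each expected kubeconfig under /etc/kubernetes is checked for the control-plane endpoint and removed when the endpoint (or the file itself) is missing, so kubeadm starts from a clean state. A compact sketch of that loop, assuming it runs on the guest (cleanStaleConfigs is an illustrative name; the endpoint and file list mirror the log):

// Sketch only: grep exits non-zero when the endpoint is absent or the file
// does not exist, and in either case the file is removed.
package main

import (
	"fmt"
	"os/exec"
)

func cleanStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443")
}
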
	I1004 04:46:59.253409   77883 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:46:59.310581   77883 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 04:46:59.310638   77883 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:46:59.424664   77883 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:46:59.424835   77883 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:46:59.424976   77883 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 04:46:59.436553   77883 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:46:59.609003   77883 out.go:235]   - Generating certificates and keys ...
	I1004 04:46:59.609110   77883 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:46:59.609209   77883 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:46:59.609317   77883 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 04:46:59.781716   77883 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1004 04:46:59.888600   77883 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1004 04:46:59.999891   77883 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1004 04:47:00.270553   77883 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1004 04:47:00.270880   77883 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-204413 localhost] and IPs [192.168.72.177 127.0.0.1 ::1]
	I1004 04:47:00.419727   77883 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1004 04:47:00.419952   77883 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-204413 localhost] and IPs [192.168.72.177 127.0.0.1 ::1]
	I1004 04:46:57.598502   76445 node_ready.go:53] node "custom-flannel-204413" has status "Ready":"False"
	I1004 04:47:00.098338   76445 node_ready.go:53] node "custom-flannel-204413" has status "Ready":"False"
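
The interleaved node_ready lines above come from a second profile (custom-flannel-204413) running in parallel and polling its node's Ready condition. A minimal client-go sketch of such a readiness check, assuming a kubeconfig path (the path below is hypothetical):

// Sketch only: fetch the node and report whether its Ready condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(client kubernetes.Interface, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(nodeReady(client, "custom-flannel-204413"))
}
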
	
	
	==> CRI-O <==
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.088469600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017222088439889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a12cbbf-daf3-47f4-a4df-ff9c187ead25 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.089162707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70c136f6-cbe2-4c40-a87d-2ff07a34319a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.089259284Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70c136f6-cbe2-4c40-a87d-2ff07a34319a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.089625560Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b,PodSandboxId:3f576cb1d451bdc93e36ffa85a35a3ae115568d6e622f85d6d37551fa16992e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728015897990347963,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b644e87c-505e-44c0-b0a0-e07df97f5f51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858a12b10e9638b4d7e2414bd17ddd89695f92cd0560c6772c3d4fc7b17fa26d,PodSandboxId:1ca0b9c5830865ffa646cc1fac486a97d0641139b4387c581e4aa8b99d73698b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728015886290459896,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bf12a9c-f04f-41fe-803a-88cc8e2e2219,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0,PodSandboxId:1274099de2596d78e48eae95734d2841408b0c9aef23d6d3b582a8b17be116ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015882862374539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wz6rd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6936a096-4173-4f58-aa65-001ea438e3a4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754,PodSandboxId:608f4b5a81f87489b980f3b3b5fa78db9962da2f9569dde6cdc52d80ff6e08ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015867217268118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4nnld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e045721-1
f51-44cd-afc7-acf8e4ce6845,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641,PodSandboxId:3f576cb1d451bdc93e36ffa85a35a3ae115568d6e622f85d6d37551fa16992e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728015867184182408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b644e87c-505e-44c0-b0a0
-e07df97f5f51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40,PodSandboxId:ee10ec4f78c5cb361eee404342a9d09c04940c3425b462e1ed9acf1d31f94d7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015863471854467,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0eeecb53a740d563832e2d4d843fd7f2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0,PodSandboxId:f58ec2a6750cd14e1558ac87b3a64744b91ae9450da547d45fcbd7c8dcf68029,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015863419449097,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0899f001d25e977da46f3ca1
dadae4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3,PodSandboxId:ad5a7fcc3f358b82422e4de43966947e20e13eddc68e23b5a7699b1c92287df8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015863438032752,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347bb00aa2cf3b8a269f04ff13dd
6a5f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500,PodSandboxId:eed40da104c75b93cb67d7aa80dc3d84e25c2f1d8be9a48e13e54058652b600c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015863374865664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f77da40e3c60520ed0857cb3dfa36e
06,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70c136f6-cbe2-4c40-a87d-2ff07a34319a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.137958948Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12c20c2f-a971-4799-86aa-1f431d84b2e0 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.138076424Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12c20c2f-a971-4799-86aa-1f431d84b2e0 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.139958143Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ecbe168a-2c6e-47f3-a3dc-5902db636eef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.140524462Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017222140492279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecbe168a-2c6e-47f3-a3dc-5902db636eef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.141200347Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d557fa7-a583-438e-9064-04368f79ae6e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.141299100Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d557fa7-a583-438e-9064-04368f79ae6e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.141635232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b,PodSandboxId:3f576cb1d451bdc93e36ffa85a35a3ae115568d6e622f85d6d37551fa16992e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728015897990347963,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b644e87c-505e-44c0-b0a0-e07df97f5f51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858a12b10e9638b4d7e2414bd17ddd89695f92cd0560c6772c3d4fc7b17fa26d,PodSandboxId:1ca0b9c5830865ffa646cc1fac486a97d0641139b4387c581e4aa8b99d73698b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728015886290459896,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bf12a9c-f04f-41fe-803a-88cc8e2e2219,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0,PodSandboxId:1274099de2596d78e48eae95734d2841408b0c9aef23d6d3b582a8b17be116ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015882862374539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wz6rd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6936a096-4173-4f58-aa65-001ea438e3a4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754,PodSandboxId:608f4b5a81f87489b980f3b3b5fa78db9962da2f9569dde6cdc52d80ff6e08ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015867217268118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4nnld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e045721-1
f51-44cd-afc7-acf8e4ce6845,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641,PodSandboxId:3f576cb1d451bdc93e36ffa85a35a3ae115568d6e622f85d6d37551fa16992e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728015867184182408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b644e87c-505e-44c0-b0a0
-e07df97f5f51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40,PodSandboxId:ee10ec4f78c5cb361eee404342a9d09c04940c3425b462e1ed9acf1d31f94d7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015863471854467,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0eeecb53a740d563832e2d4d843fd7f2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0,PodSandboxId:f58ec2a6750cd14e1558ac87b3a64744b91ae9450da547d45fcbd7c8dcf68029,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015863419449097,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0899f001d25e977da46f3ca1
dadae4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3,PodSandboxId:ad5a7fcc3f358b82422e4de43966947e20e13eddc68e23b5a7699b1c92287df8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015863438032752,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347bb00aa2cf3b8a269f04ff13dd
6a5f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500,PodSandboxId:eed40da104c75b93cb67d7aa80dc3d84e25c2f1d8be9a48e13e54058652b600c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015863374865664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f77da40e3c60520ed0857cb3dfa36e
06,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d557fa7-a583-438e-9064-04368f79ae6e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.187242318Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2de5c77e-a258-4c70-beff-1f97569d8237 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.187358053Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2de5c77e-a258-4c70-beff-1f97569d8237 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.189039200Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2fecdab-57ca-48d7-a87c-950124e04d8f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.189676076Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017222189647983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2fecdab-57ca-48d7-a87c-950124e04d8f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.190237527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2505db7-fd1f-40bd-b27d-fea9477bbb1b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.190332856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2505db7-fd1f-40bd-b27d-fea9477bbb1b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.190656590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b,PodSandboxId:3f576cb1d451bdc93e36ffa85a35a3ae115568d6e622f85d6d37551fa16992e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728015897990347963,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b644e87c-505e-44c0-b0a0-e07df97f5f51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858a12b10e9638b4d7e2414bd17ddd89695f92cd0560c6772c3d4fc7b17fa26d,PodSandboxId:1ca0b9c5830865ffa646cc1fac486a97d0641139b4387c581e4aa8b99d73698b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728015886290459896,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bf12a9c-f04f-41fe-803a-88cc8e2e2219,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0,PodSandboxId:1274099de2596d78e48eae95734d2841408b0c9aef23d6d3b582a8b17be116ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015882862374539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wz6rd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6936a096-4173-4f58-aa65-001ea438e3a4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754,PodSandboxId:608f4b5a81f87489b980f3b3b5fa78db9962da2f9569dde6cdc52d80ff6e08ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015867217268118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4nnld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e045721-1
f51-44cd-afc7-acf8e4ce6845,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641,PodSandboxId:3f576cb1d451bdc93e36ffa85a35a3ae115568d6e622f85d6d37551fa16992e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728015867184182408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b644e87c-505e-44c0-b0a0
-e07df97f5f51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40,PodSandboxId:ee10ec4f78c5cb361eee404342a9d09c04940c3425b462e1ed9acf1d31f94d7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015863471854467,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0eeecb53a740d563832e2d4d843fd7f2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0,PodSandboxId:f58ec2a6750cd14e1558ac87b3a64744b91ae9450da547d45fcbd7c8dcf68029,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015863419449097,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0899f001d25e977da46f3ca1
dadae4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3,PodSandboxId:ad5a7fcc3f358b82422e4de43966947e20e13eddc68e23b5a7699b1c92287df8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015863438032752,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347bb00aa2cf3b8a269f04ff13dd
6a5f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500,PodSandboxId:eed40da104c75b93cb67d7aa80dc3d84e25c2f1d8be9a48e13e54058652b600c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015863374865664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f77da40e3c60520ed0857cb3dfa36e
06,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2505db7-fd1f-40bd-b27d-fea9477bbb1b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.238164307Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54c85c43-bf35-4ea3-9cac-8b7ad9fb1fc5 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.238300109Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54c85c43-bf35-4ea3-9cac-8b7ad9fb1fc5 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.240966975Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cbf87b9f-8e1d-464c-a590-653afd3a8707 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.241600902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017222241490746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cbf87b9f-8e1d-464c-a590-653afd3a8707 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.242737221Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f00d8e9-9088-483d-ac63-a7bbc5f7c0af name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.242833725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f00d8e9-9088-483d-ac63-a7bbc5f7c0af name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:47:02 default-k8s-diff-port-281471 crio[702]: time="2024-10-04 04:47:02.243108988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b,PodSandboxId:3f576cb1d451bdc93e36ffa85a35a3ae115568d6e622f85d6d37551fa16992e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728015897990347963,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b644e87c-505e-44c0-b0a0-e07df97f5f51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858a12b10e9638b4d7e2414bd17ddd89695f92cd0560c6772c3d4fc7b17fa26d,PodSandboxId:1ca0b9c5830865ffa646cc1fac486a97d0641139b4387c581e4aa8b99d73698b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728015886290459896,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5bf12a9c-f04f-41fe-803a-88cc8e2e2219,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0,PodSandboxId:1274099de2596d78e48eae95734d2841408b0c9aef23d6d3b582a8b17be116ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015882862374539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wz6rd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6936a096-4173-4f58-aa65-001ea438e3a4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754,PodSandboxId:608f4b5a81f87489b980f3b3b5fa78db9962da2f9569dde6cdc52d80ff6e08ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015867217268118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4nnld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e045721-1
f51-44cd-afc7-acf8e4ce6845,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641,PodSandboxId:3f576cb1d451bdc93e36ffa85a35a3ae115568d6e622f85d6d37551fa16992e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728015867184182408,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b644e87c-505e-44c0-b0a0
-e07df97f5f51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40,PodSandboxId:ee10ec4f78c5cb361eee404342a9d09c04940c3425b462e1ed9acf1d31f94d7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015863471854467,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 0eeecb53a740d563832e2d4d843fd7f2,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0,PodSandboxId:f58ec2a6750cd14e1558ac87b3a64744b91ae9450da547d45fcbd7c8dcf68029,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015863419449097,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f0899f001d25e977da46f3ca1
dadae4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3,PodSandboxId:ad5a7fcc3f358b82422e4de43966947e20e13eddc68e23b5a7699b1c92287df8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015863438032752,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347bb00aa2cf3b8a269f04ff13dd
6a5f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500,PodSandboxId:eed40da104c75b93cb67d7aa80dc3d84e25c2f1d8be9a48e13e54058652b600c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015863374865664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281471,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f77da40e3c60520ed0857cb3dfa36e
06,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f00d8e9-9088-483d-ac63-a7bbc5f7c0af name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ec898e33ba398       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Running             storage-provisioner       2                   3f576cb1d451b       storage-provisioner
	858a12b10e963       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   1ca0b9c583086       busybox
	7c6d3555bccdd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      22 minutes ago      Running             coredns                   1                   1274099de2596       coredns-7c65d6cfc9-wz6rd
	387473e4357dc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      22 minutes ago      Running             kube-proxy                1                   608f4b5a81f87       kube-proxy-4nnld
	d2d04e275366a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   3f576cb1d451b       storage-provisioner
	d889ba1109ff2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      22 minutes ago      Running             kube-controller-manager   1                   ee10ec4f78c5c       kube-controller-manager-default-k8s-diff-port-281471
	59f9dd635170a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      22 minutes ago      Running             kube-scheduler            1                   ad5a7fcc3f358       kube-scheduler-default-k8s-diff-port-281471
	fe3375782091c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      22 minutes ago      Running             etcd                      1                   f58ec2a6750cd       etcd-default-k8s-diff-port-281471
	8e5ab1b72e413       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      22 minutes ago      Running             kube-apiserver            1                   eed40da104c75       kube-apiserver-default-k8s-diff-port-281471
	
	
	==> coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33238 - 32232 "HINFO IN 6507743067045154330.9083972573469339683. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014209041s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-281471
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-281471
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=default-k8s-diff-port-281471
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T04_18_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 04:18:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-281471
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 04:46:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 04:45:22 +0000   Fri, 04 Oct 2024 04:18:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 04:45:22 +0000   Fri, 04 Oct 2024 04:18:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 04:45:22 +0000   Fri, 04 Oct 2024 04:18:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 04:45:22 +0000   Fri, 04 Oct 2024 04:24:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.201
	  Hostname:    default-k8s-diff-port-281471
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c1ffe0bbcc447ad9a342c41ec9f8913
	  System UUID:                5c1ffe0b-bcc4-47ad-9a34-2c41ec9f8913
	  Boot ID:                    62a49ed2-5300-43d2-afd5-efe7c53cf70c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-wz6rd                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-281471                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-default-k8s-diff-port-281471             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-281471    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-4nnld                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-281471             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-f6qhr                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node default-k8s-diff-port-281471 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node default-k8s-diff-port-281471 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node default-k8s-diff-port-281471 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node default-k8s-diff-port-281471 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-281471 event: Registered Node default-k8s-diff-port-281471 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-281471 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-281471 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-281471 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-281471 event: Registered Node default-k8s-diff-port-281471 in Controller
	
	
	==> dmesg <==
	[Oct 4 04:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055222] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040588] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct 4 04:24] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.553627] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.602347] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.976974] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.059998] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067965] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.188511] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.148379] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.306488] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[  +4.495235] systemd-fstab-generator[788]: Ignoring "noauto" option for root device
	[  +0.063166] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.005662] systemd-fstab-generator[908]: Ignoring "noauto" option for root device
	[  +4.661682] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.920022] systemd-fstab-generator[1547]: Ignoring "noauto" option for root device
	[  +4.757043] kauditd_printk_skb: 64 callbacks suppressed
	[  +7.792493] kauditd_printk_skb: 11 callbacks suppressed
	[ +15.456146] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] <==
	{"level":"info","ts":"2024-10-04T04:45:08.275014Z","caller":"traceutil/trace.go:171","msg":"trace[514548542] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1599; }","duration":"125.640601ms","start":"2024-10-04T04:45:08.149363Z","end":"2024-10-04T04:45:08.275004Z","steps":["trace[514548542] 'agreement among raft nodes before linearized reading'  (duration: 125.534289ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T04:45:08.275078Z","caller":"traceutil/trace.go:171","msg":"trace[1810350584] transaction","detail":"{read_only:false; response_revision:1599; number_of_response:1; }","duration":"151.868094ms","start":"2024-10-04T04:45:08.123188Z","end":"2024-10-04T04:45:08.275056Z","steps":["trace[1810350584] 'process raft request'  (duration: 88.482846ms)","trace[1810350584] 'compare'  (duration: 63.058994ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T04:45:08.469003Z","caller":"traceutil/trace.go:171","msg":"trace[218814090] linearizableReadLoop","detail":"{readStateIndex:1893; appliedIndex:1892; }","duration":"188.340374ms","start":"2024-10-04T04:45:08.280643Z","end":"2024-10-04T04:45:08.468983Z","steps":["trace[218814090] 'read index received'  (duration: 107.551167ms)","trace[218814090] 'applied index is now lower than readState.Index'  (duration: 80.788394ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T04:45:08.469500Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.836069ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:476"}
	{"level":"info","ts":"2024-10-04T04:45:08.470284Z","caller":"traceutil/trace.go:171","msg":"trace[443802934] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:1600; }","duration":"189.629317ms","start":"2024-10-04T04:45:08.280640Z","end":"2024-10-04T04:45:08.470269Z","steps":["trace[443802934] 'agreement among raft nodes before linearized reading'  (duration: 188.66386ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T04:45:08.469627Z","caller":"traceutil/trace.go:171","msg":"trace[1442808621] transaction","detail":"{read_only:false; response_revision:1600; number_of_response:1; }","duration":"190.286863ms","start":"2024-10-04T04:45:08.279324Z","end":"2024-10-04T04:45:08.469611Z","steps":["trace[1442808621] 'process raft request'  (duration: 108.915363ms)","trace[1442808621] 'compare'  (duration: 80.596329ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-04T04:45:46.956339Z","caller":"traceutil/trace.go:171","msg":"trace[1373129171] linearizableReadLoop","detail":"{readStateIndex:1932; appliedIndex:1931; }","duration":"192.990565ms","start":"2024-10-04T04:45:46.763321Z","end":"2024-10-04T04:45:46.956311Z","steps":["trace[1373129171] 'read index received'  (duration: 192.842179ms)","trace[1373129171] 'applied index is now lower than readState.Index'  (duration: 147.97µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T04:45:46.956675Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.34908ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T04:45:46.956816Z","caller":"traceutil/trace.go:171","msg":"trace[1904757054] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1632; }","duration":"193.51341ms","start":"2024-10-04T04:45:46.763295Z","end":"2024-10-04T04:45:46.956808Z","steps":["trace[1904757054] 'agreement among raft nodes before linearized reading'  (duration: 193.328622ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T04:45:46.956674Z","caller":"traceutil/trace.go:171","msg":"trace[186447149] transaction","detail":"{read_only:false; response_revision:1632; number_of_response:1; }","duration":"267.059838ms","start":"2024-10-04T04:45:46.689589Z","end":"2024-10-04T04:45:46.956649Z","steps":["trace[186447149] 'process raft request'  (duration: 266.610994ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T04:46:08.450756Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.466617ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10689173137671621868 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.201\" mod_revision:1642 > success:<request_put:<key:\"/registry/masterleases/192.168.39.201\" value_size:68 lease:1465801100816846058 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.201\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-04T04:46:08.451013Z","caller":"traceutil/trace.go:171","msg":"trace[1511696820] transaction","detail":"{read_only:false; response_revision:1650; number_of_response:1; }","duration":"258.160671ms","start":"2024-10-04T04:46:08.192831Z","end":"2024-10-04T04:46:08.450992Z","steps":["trace[1511696820] 'process raft request'  (duration: 126.771291ms)","trace[1511696820] 'compare'  (duration: 130.369646ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T04:46:33.815611Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.364867ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T04:46:33.816524Z","caller":"traceutil/trace.go:171","msg":"trace[269740853] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1670; }","duration":"178.62559ms","start":"2024-10-04T04:46:33.637107Z","end":"2024-10-04T04:46:33.815733Z","steps":["trace[269740853] 'range keys from in-memory index tree'  (duration: 178.352229ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T04:46:33.963141Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.710267ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10689173137671622020 > lease_revoke:<id:14579255c5908d25>","response":"size:27"}
	{"level":"info","ts":"2024-10-04T04:46:33.963359Z","caller":"traceutil/trace.go:171","msg":"trace[672245650] linearizableReadLoop","detail":"{readStateIndex:1980; appliedIndex:1979; }","duration":"328.27205ms","start":"2024-10-04T04:46:33.635031Z","end":"2024-10-04T04:46:33.963303Z","steps":["trace[672245650] 'read index received'  (duration: 124.336549ms)","trace[672245650] 'applied index is now lower than readState.Index'  (duration: 203.933874ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-04T04:46:33.963665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"328.623403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-10-04T04:46:33.963976Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.14059ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T04:46:33.964030Z","caller":"traceutil/trace.go:171","msg":"trace[1095848917] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1670; }","duration":"147.187067ms","start":"2024-10-04T04:46:33.816829Z","end":"2024-10-04T04:46:33.964016Z","steps":["trace[1095848917] 'agreement among raft nodes before linearized reading'  (duration: 147.133626ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T04:46:33.964000Z","caller":"traceutil/trace.go:171","msg":"trace[1780780712] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1670; }","duration":"328.965678ms","start":"2024-10-04T04:46:33.635019Z","end":"2024-10-04T04:46:33.963985Z","steps":["trace[1780780712] 'agreement among raft nodes before linearized reading'  (duration: 328.474781ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T04:46:33.963936Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.847177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-04T04:46:33.964510Z","caller":"traceutil/trace.go:171","msg":"trace[1345736683] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1670; }","duration":"204.427956ms","start":"2024-10-04T04:46:33.760070Z","end":"2024-10-04T04:46:33.964498Z","steps":["trace[1345736683] 'agreement among raft nodes before linearized reading'  (duration: 203.799433ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-04T04:46:33.964589Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-04T04:46:33.634977Z","time spent":"329.505291ms","remote":"127.0.0.1:54334","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-10-04T04:46:35.497503Z","caller":"traceutil/trace.go:171","msg":"trace[590101162] transaction","detail":"{read_only:false; response_revision:1672; number_of_response:1; }","duration":"263.587409ms","start":"2024-10-04T04:46:35.233895Z","end":"2024-10-04T04:46:35.497483Z","steps":["trace[590101162] 'process raft request'  (duration: 263.441753ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-04T04:46:59.733272Z","caller":"traceutil/trace.go:171","msg":"trace[1734443569] transaction","detail":"{read_only:false; response_revision:1691; number_of_response:1; }","duration":"111.149351ms","start":"2024-10-04T04:46:59.622085Z","end":"2024-10-04T04:46:59.733234Z","steps":["trace[1734443569] 'process raft request'  (duration: 110.705649ms)"],"step_count":1}
	
	
	==> kernel <==
	 04:47:02 up 23 min,  0 users,  load average: 0.59, 0.25, 0.14
	Linux default-k8s-diff-port-281471 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] <==
	I1004 04:42:27.718356       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:42:27.718418       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 04:44:26.716761       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:44:26.716884       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1004 04:44:27.719264       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:44:27.719324       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1004 04:44:27.719388       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:44:27.719443       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1004 04:44:27.720466       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:44:27.720571       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 04:45:27.721123       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:45:27.721206       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1004 04:45:27.721145       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:45:27.721275       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1004 04:45:27.722480       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:45:27.722572       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] <==
	E1004 04:42:00.306318       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:42:00.788359       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:42:30.312446       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:42:30.795731       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:43:00.319066       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:43:00.804431       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:43:30.325298       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:43:30.812663       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:44:00.331736       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:44:00.820601       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:44:30.337833       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:44:30.828174       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:45:00.344435       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:45:00.838031       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 04:45:22.369996       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-281471"
	E1004 04:45:30.351982       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:45:30.845716       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 04:45:35.769869       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="184.869µs"
	I1004 04:45:48.772175       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="232.513µs"
	E1004 04:46:00.359190       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:46:00.855190       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:46:30.368481       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:46:30.865492       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:47:00.375605       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:47:00.876752       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 04:24:27.410456       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 04:24:27.422824       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.201"]
	E1004 04:24:27.423036       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 04:24:27.458963       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 04:24:27.459004       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 04:24:27.459035       1 server_linux.go:169] "Using iptables Proxier"
	I1004 04:24:27.461860       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 04:24:27.462358       1 server.go:483] "Version info" version="v1.31.1"
	I1004 04:24:27.462633       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 04:24:27.463878       1 config.go:199] "Starting service config controller"
	I1004 04:24:27.463938       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 04:24:27.463996       1 config.go:105] "Starting endpoint slice config controller"
	I1004 04:24:27.464019       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 04:24:27.465775       1 config.go:328] "Starting node config controller"
	I1004 04:24:27.466702       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 04:24:27.564086       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 04:24:27.564231       1 shared_informer.go:320] Caches are synced for service config
	I1004 04:24:27.567516       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] <==
	I1004 04:24:24.757582       1 serving.go:386] Generated self-signed cert in-memory
	W1004 04:24:26.684632       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 04:24:26.684783       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 04:24:26.684873       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 04:24:26.684901       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 04:24:26.728749       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1004 04:24:26.728855       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 04:24:26.731402       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 04:24:26.731498       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 04:24:26.732194       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1004 04:24:26.732287       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1004 04:24:26.837399       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 04:45:53 default-k8s-diff-port-281471 kubelet[915]: E1004 04:45:53.067405     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017153065942233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:46:02 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:02.754699     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-f6qhr" podUID="46c2870a-41a6-46a1-bbbd-f38f2e266873"
	Oct 04 04:46:03 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:03.069794     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017163069081609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:46:03 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:03.070193     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017163069081609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:46:13 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:13.072198     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017173071411084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:46:13 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:13.072242     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017173071411084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:46:14 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:14.756209     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-f6qhr" podUID="46c2870a-41a6-46a1-bbbd-f38f2e266873"
	Oct 04 04:46:22 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:22.783074     915 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 04:46:22 default-k8s-diff-port-281471 kubelet[915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 04:46:22 default-k8s-diff-port-281471 kubelet[915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 04:46:22 default-k8s-diff-port-281471 kubelet[915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 04:46:22 default-k8s-diff-port-281471 kubelet[915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 04:46:23 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:23.074230     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017183073303071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:46:23 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:23.074287     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017183073303071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:46:28 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:28.754969     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-f6qhr" podUID="46c2870a-41a6-46a1-bbbd-f38f2e266873"
	Oct 04 04:46:33 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:33.077172     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017193076074298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:46:33 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:33.077525     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017193076074298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:46:39 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:39.753473     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-f6qhr" podUID="46c2870a-41a6-46a1-bbbd-f38f2e266873"
	Oct 04 04:46:43 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:43.080321     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017203078968745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:46:43 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:43.081999     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017203078968745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:46:53 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:53.083954     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017213083079753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:46:53 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:53.084293     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017213083079753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:46:54 default-k8s-diff-port-281471 kubelet[915]: E1004 04:46:54.753649     915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-f6qhr" podUID="46c2870a-41a6-46a1-bbbd-f38f2e266873"
	Oct 04 04:47:03 default-k8s-diff-port-281471 kubelet[915]: E1004 04:47:03.086026     915 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017223085399992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:47:03 default-k8s-diff-port-281471 kubelet[915]: E1004 04:47:03.086100     915 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017223085399992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] <==
	I1004 04:24:27.307080       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1004 04:24:57.314392       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] <==
	I1004 04:24:58.102434       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 04:24:58.114499       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 04:24:58.114743       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 04:24:58.132784       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 04:24:58.133796       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-281471_4d9956fa-531e-46b7-9e36-b11659f8607e!
	I1004 04:24:58.133466       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f432bad1-b1f6-4130-b9f1-8e2b00dd53a4", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-281471_4d9956fa-531e-46b7-9e36-b11659f8607e became leader
	I1004 04:24:58.234597       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-281471_4d9956fa-531e-46b7-9e36-b11659f8607e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-281471 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-f6qhr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-281471 describe pod metrics-server-6867b74b74-f6qhr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-281471 describe pod metrics-server-6867b74b74-f6qhr: exit status 1 (80.451531ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-f6qhr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-281471 describe pod metrics-server-6867b74b74-f6qhr: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (543.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (344.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-658545 -n no-preload-658545
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-04 04:44:10.959083227 +0000 UTC m=+6969.892023781
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-658545 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-658545 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.505µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-658545 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658545 -n no-preload-658545
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-658545 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-658545 logs -n 25: (1.249552593s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-934812                                  | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-617497             | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-617497                  | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617497 --memory=2200 --alsologtostderr   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:17 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-617497 image list                           | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	| delete  | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:18 UTC |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-658545                  | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-658545                                   | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC | 04 Oct 24 04:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-281471  | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC | 04 Oct 24 04:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-420062        | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-934812                 | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-934812                                  | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:19 UTC | 04 Oct 24 04:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC | 04 Oct 24 04:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-420062             | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC | 04 Oct 24 04:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-281471       | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:21 UTC | 04 Oct 24 04:28 UTC |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:44 UTC | 04 Oct 24 04:44 UTC |
	| start   | -p auto-204413 --memory=3072                           | auto-204413                  | jenkins | v1.34.0 | 04 Oct 24 04:44 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
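
	The table above is the audit trail of minikube invocations for this run. For local triage, the last start in the table (auto-204413) should be approximately reproducible with the same flags; this is a minimal sketch only, assuming a working kvm2/libvirt host and the out/minikube-linux-amd64 binary built for this job:

	# Replay the auto-204413 start with the flags recorded in the table (sketch; kvm2/libvirt assumed).
	out/minikube-linux-amd64 start -p auto-204413 --memory=3072 \
	  --alsologtostderr --wait=true --wait-timeout=15m \
	  --driver=kvm2 --container-runtime=crio
	# Then check status and collect logs with the standard subcommands.
	out/minikube-linux-amd64 status -p auto-204413
	out/minikube-linux-amd64 logs -p auto-204413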
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 04:44:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 04:44:08.668851   73561 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:44:08.668943   73561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:44:08.668950   73561 out.go:358] Setting ErrFile to fd 2...
	I1004 04:44:08.668954   73561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:44:08.669133   73561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:44:08.669696   73561 out.go:352] Setting JSON to false
	I1004 04:44:08.670633   73561 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8794,"bootTime":1728008255,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 04:44:08.670722   73561 start.go:139] virtualization: kvm guest
	I1004 04:44:08.672859   73561 out.go:177] * [auto-204413] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 04:44:08.673955   73561 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 04:44:08.673963   73561 notify.go:220] Checking for updates...
	I1004 04:44:08.676528   73561 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 04:44:08.677771   73561 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:44:08.678956   73561 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:44:08.680102   73561 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 04:44:08.681184   73561 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 04:44:08.683009   73561 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:44:08.683158   73561 config.go:182] Loaded profile config "embed-certs-934812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:44:08.683295   73561 config.go:182] Loaded profile config "no-preload-658545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:44:08.683418   73561 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 04:44:08.720836   73561 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 04:44:08.721975   73561 start.go:297] selected driver: kvm2
	I1004 04:44:08.721991   73561 start.go:901] validating driver "kvm2" against <nil>
	I1004 04:44:08.722005   73561 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 04:44:08.722779   73561 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:44:08.722873   73561 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 04:44:08.737472   73561 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 04:44:08.737525   73561 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 04:44:08.737819   73561 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:44:08.737855   73561 cni.go:84] Creating CNI manager for ""
	I1004 04:44:08.737905   73561 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:44:08.737915   73561 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 04:44:08.737982   73561 start.go:340] cluster config:
	{Name:auto-204413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-204413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:44:08.738103   73561 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:44:08.741029   73561 out.go:177] * Starting "auto-204413" primary control-plane node in "auto-204413" cluster
	I1004 04:44:08.742164   73561 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:44:08.742199   73561 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 04:44:08.742209   73561 cache.go:56] Caching tarball of preloaded images
	I1004 04:44:08.742302   73561 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 04:44:08.742317   73561 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 04:44:08.742404   73561 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/config.json ...
	I1004 04:44:08.742422   73561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/config.json: {Name:mk930c2cf63e9de79ae21da7ee516f8ac81c09b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:44:08.742578   73561 start.go:360] acquireMachinesLock for auto-204413: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:44:08.742611   73561 start.go:364] duration metric: took 18.275µs to acquireMachinesLock for "auto-204413"
	I1004 04:44:08.742642   73561 start.go:93] Provisioning new machine with config: &{Name:auto-204413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:auto-204413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:44:08.742701   73561 start.go:125] createHost starting for "" (driver="kvm2")
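
	The trace above ends just before host creation, after the profile config for auto-204413 has been written (profile.go:143 / lock.go:35). To see exactly what was persisted, the saved config.json can be inspected directly; a minimal sketch, assuming the JSON field names mirror the struct fields shown in the "cluster config:" dump above (Name, Driver, Memory, KubernetesConfig, Nodes):

	# Pretty-print the persisted profile config referenced at profile.go:143 above.
	python3 -m json.tool /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/config.json
	# Or select the fields relevant to this run (field names assumed to match the struct dump).
	jq '{Name, Driver, Memory, KubernetesVersion: .KubernetesConfig.KubernetesVersion}' \
	  /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/auto-204413/config.json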
	
	
	==> CRI-O <==
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.561352536Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017051561213427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ed932ba-044a-4b16-9330-e9eca05abdd8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.562691975Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95427a98-38b2-4877-abbb-cace9928b0a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.562786304Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95427a98-38b2-4877-abbb-cace9928b0a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.563062732Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681,PodSandboxId:90e3478943ec0363b983d0920560be74fe9cd2768f241e0af091d6bebd927cb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728015930257912715,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bf1888-f061-44ad-9c2b-0f2db0ade47f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc71dbacc9f4fe3d3d7366fc1b0b6b6b4f2fa3fa5d4a4b4e577ea6cf1fcb947,PodSandboxId:6c5decc647df41907c1da01451e76185d89e481f142634a98a4f446c0ff3eb4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728015910640621213,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61784d4d-400f-48bd-9ff5-aa2cdcc3a074,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e,PodSandboxId:f82e381a381ff249350431632919c7c16a3432c89cbac9328088c655294a40f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015906950070910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a5d64c0-542f-4972-b038-e675495a22b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab,PodSandboxId:f1ce6e93011b35981cc3f5b623e91e84f5a3e535d3162400ebc2beb06cfd609e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015899461910306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dvr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b5c79-3995-4de5-ae
b2-da465aeb66dd,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28,PodSandboxId:90e3478943ec0363b983d0920560be74fe9cd2768f241e0af091d6bebd927cb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728015899390469101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bf1888-f061-44ad-9c2b-0f2db0ade4
7f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e,PodSandboxId:a82ca4aabbd99765a4c0d4f7ca3907c7106ce9d1336763e2f8fd6ae0c2234a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015894709776309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040cfee45caa04849ca5d3640f501d0b,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09,PodSandboxId:99ac6a716156d2a7970d0e30ae718859564ca5da3fd507b5cfe4a03a0f4e29fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015894689534500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af828b86d14cca95a4d137db49291e92,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6,PodSandboxId:c7b9243060eb547f3917374710b770dceebd61c310dabe87a5baed13c11793b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015894665966003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c43528b6eadbf4f9b537af1521300fc,},Annotat
ions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38,PodSandboxId:b6b07f874979b29898186370dae210baba8b89361f5a053125884fa3273482d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015894648623461,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84d8f4e17e13e92c39daa0117fee16,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95427a98-38b2-4877-abbb-cace9928b0a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.608698432Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=57e26c9c-2538-430e-9657-3e188cef3f39 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.608793690Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=57e26c9c-2538-430e-9657-3e188cef3f39 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.611543970Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3ef3e02-e680-4b2d-aa79-04e8b88be9ac name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.611894715Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017051611870976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3ef3e02-e680-4b2d-aa79-04e8b88be9ac name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.612529806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35d7db67-927d-4e2b-b6bb-70fdb3292e8f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.612628632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35d7db67-927d-4e2b-b6bb-70fdb3292e8f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.612914224Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681,PodSandboxId:90e3478943ec0363b983d0920560be74fe9cd2768f241e0af091d6bebd927cb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728015930257912715,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bf1888-f061-44ad-9c2b-0f2db0ade47f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc71dbacc9f4fe3d3d7366fc1b0b6b6b4f2fa3fa5d4a4b4e577ea6cf1fcb947,PodSandboxId:6c5decc647df41907c1da01451e76185d89e481f142634a98a4f446c0ff3eb4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728015910640621213,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61784d4d-400f-48bd-9ff5-aa2cdcc3a074,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e,PodSandboxId:f82e381a381ff249350431632919c7c16a3432c89cbac9328088c655294a40f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015906950070910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a5d64c0-542f-4972-b038-e675495a22b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab,PodSandboxId:f1ce6e93011b35981cc3f5b623e91e84f5a3e535d3162400ebc2beb06cfd609e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015899461910306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dvr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b5c79-3995-4de5-ae
b2-da465aeb66dd,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28,PodSandboxId:90e3478943ec0363b983d0920560be74fe9cd2768f241e0af091d6bebd927cb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728015899390469101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bf1888-f061-44ad-9c2b-0f2db0ade4
7f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e,PodSandboxId:a82ca4aabbd99765a4c0d4f7ca3907c7106ce9d1336763e2f8fd6ae0c2234a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015894709776309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040cfee45caa04849ca5d3640f501d0b,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09,PodSandboxId:99ac6a716156d2a7970d0e30ae718859564ca5da3fd507b5cfe4a03a0f4e29fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015894689534500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af828b86d14cca95a4d137db49291e92,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6,PodSandboxId:c7b9243060eb547f3917374710b770dceebd61c310dabe87a5baed13c11793b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015894665966003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c43528b6eadbf4f9b537af1521300fc,},Annotat
ions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38,PodSandboxId:b6b07f874979b29898186370dae210baba8b89361f5a053125884fa3273482d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015894648623461,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84d8f4e17e13e92c39daa0117fee16,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=35d7db67-927d-4e2b-b6bb-70fdb3292e8f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.655048580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee5407ca-b95e-4f8a-8d39-f4cc3916ceb5 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.655174238Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee5407ca-b95e-4f8a-8d39-f4cc3916ceb5 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.656498032Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ca80d24-0357-4769-9c04-c3c2d59939fd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.656952784Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017051656927526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ca80d24-0357-4769-9c04-c3c2d59939fd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.657893566Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07575575-d750-4ba0-92c6-e0013fb0548b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.657988213Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07575575-d750-4ba0-92c6-e0013fb0548b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.658339829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681,PodSandboxId:90e3478943ec0363b983d0920560be74fe9cd2768f241e0af091d6bebd927cb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728015930257912715,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bf1888-f061-44ad-9c2b-0f2db0ade47f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc71dbacc9f4fe3d3d7366fc1b0b6b6b4f2fa3fa5d4a4b4e577ea6cf1fcb947,PodSandboxId:6c5decc647df41907c1da01451e76185d89e481f142634a98a4f446c0ff3eb4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728015910640621213,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61784d4d-400f-48bd-9ff5-aa2cdcc3a074,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e,PodSandboxId:f82e381a381ff249350431632919c7c16a3432c89cbac9328088c655294a40f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015906950070910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a5d64c0-542f-4972-b038-e675495a22b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab,PodSandboxId:f1ce6e93011b35981cc3f5b623e91e84f5a3e535d3162400ebc2beb06cfd609e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015899461910306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dvr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b5c79-3995-4de5-ae
b2-da465aeb66dd,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28,PodSandboxId:90e3478943ec0363b983d0920560be74fe9cd2768f241e0af091d6bebd927cb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728015899390469101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bf1888-f061-44ad-9c2b-0f2db0ade4
7f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e,PodSandboxId:a82ca4aabbd99765a4c0d4f7ca3907c7106ce9d1336763e2f8fd6ae0c2234a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015894709776309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040cfee45caa04849ca5d3640f501d0b,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09,PodSandboxId:99ac6a716156d2a7970d0e30ae718859564ca5da3fd507b5cfe4a03a0f4e29fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015894689534500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af828b86d14cca95a4d137db49291e92,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6,PodSandboxId:c7b9243060eb547f3917374710b770dceebd61c310dabe87a5baed13c11793b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015894665966003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c43528b6eadbf4f9b537af1521300fc,},Annotat
ions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38,PodSandboxId:b6b07f874979b29898186370dae210baba8b89361f5a053125884fa3273482d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015894648623461,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84d8f4e17e13e92c39daa0117fee16,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07575575-d750-4ba0-92c6-e0013fb0548b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.697850017Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f8a36a3-5541-4788-a2fd-39e5650d283a name=/runtime.v1.RuntimeService/Version
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.697946817Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f8a36a3-5541-4788-a2fd-39e5650d283a name=/runtime.v1.RuntimeService/Version
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.699077699Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f8a5f27-f5de-49db-9b51-e016f283d17d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.699658580Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017051699626362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f8a5f27-f5de-49db-9b51-e016f283d17d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.700504815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67a2fa9b-e1c8-4e8a-ab56-5e64bef4b67c name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.700571702Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67a2fa9b-e1c8-4e8a-ab56-5e64bef4b67c name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:11 no-preload-658545 crio[708]: time="2024-10-04 04:44:11.700900484Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681,PodSandboxId:90e3478943ec0363b983d0920560be74fe9cd2768f241e0af091d6bebd927cb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728015930257912715,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bf1888-f061-44ad-9c2b-0f2db0ade47f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc71dbacc9f4fe3d3d7366fc1b0b6b6b4f2fa3fa5d4a4b4e577ea6cf1fcb947,PodSandboxId:6c5decc647df41907c1da01451e76185d89e481f142634a98a4f446c0ff3eb4f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728015910640621213,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61784d4d-400f-48bd-9ff5-aa2cdcc3a074,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e,PodSandboxId:f82e381a381ff249350431632919c7c16a3432c89cbac9328088c655294a40f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728015906950070910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a5d64c0-542f-4972-b038-e675495a22b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab,PodSandboxId:f1ce6e93011b35981cc3f5b623e91e84f5a3e535d3162400ebc2beb06cfd609e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728015899461910306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dvr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b5c79-3995-4de5-ae
b2-da465aeb66dd,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28,PodSandboxId:90e3478943ec0363b983d0920560be74fe9cd2768f241e0af091d6bebd927cb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728015899390469101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bf1888-f061-44ad-9c2b-0f2db0ade4
7f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e,PodSandboxId:a82ca4aabbd99765a4c0d4f7ca3907c7106ce9d1336763e2f8fd6ae0c2234a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728015894709776309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040cfee45caa04849ca5d3640f501d0b,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09,PodSandboxId:99ac6a716156d2a7970d0e30ae718859564ca5da3fd507b5cfe4a03a0f4e29fd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728015894689534500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af828b86d14cca95a4d137db49291e92,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6,PodSandboxId:c7b9243060eb547f3917374710b770dceebd61c310dabe87a5baed13c11793b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728015894665966003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c43528b6eadbf4f9b537af1521300fc,},Annotat
ions:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38,PodSandboxId:b6b07f874979b29898186370dae210baba8b89361f5a053125884fa3273482d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728015894648623461,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-658545,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f84d8f4e17e13e92c39daa0117fee16,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67a2fa9b-e1c8-4e8a-ab56-5e64bef4b67c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5451845c1793f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   90e3478943ec0       storage-provisioner
	1fc71dbacc9f4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   6c5decc647df4       busybox
	8f0f82fef0d93       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Running             coredns                   1                   f82e381a381ff       coredns-7c65d6cfc9-ppggj
	d3a50dddda4ab       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      19 minutes ago      Running             kube-proxy                1                   f1ce6e93011b3       kube-proxy-dvr6b
	e1cf4915ff1e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   90e3478943ec0       storage-provisioner
	bd0fa97b8409f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      19 minutes ago      Running             kube-scheduler            1                   a82ca4aabbd99       kube-scheduler-no-preload-658545
	1d381a201b984       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      19 minutes ago      Running             kube-apiserver            1                   99ac6a716156d       kube-apiserver-no-preload-658545
	1f1e00105cb78       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      19 minutes ago      Running             kube-controller-manager   1                   c7b9243060eb5       kube-controller-manager-no-preload-658545
	def980019915c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   b6b07f874979b       etcd-no-preload-658545
	
	
	==> coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34774 - 9872 "HINFO IN 4357990399947125494.2098345499879057467. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019654964s
	
	
	==> describe nodes <==
	Name:               no-preload-658545
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-658545
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=no-preload-658545
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T04_15_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 04:15:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-658545
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 04:44:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 04:40:46 +0000   Fri, 04 Oct 2024 04:15:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 04:40:46 +0000   Fri, 04 Oct 2024 04:15:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 04:40:46 +0000   Fri, 04 Oct 2024 04:15:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 04:40:46 +0000   Fri, 04 Oct 2024 04:25:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.54
	  Hostname:    no-preload-658545
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d759497abb79413c9c5a7b20b9f885c4
	  System UUID:                d759497a-bb79-413c-9c5a-7b20b9f885c4
	  Boot ID:                    a5102572-ba28-43f1-a510-6ba9cb4798b3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-ppggj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-658545                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-658545             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-658545    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-dvr6b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-658545             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-zsf86              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m (x2 over 28m)  kubelet          Node no-preload-658545 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x2 over 28m)  kubelet          Node no-preload-658545 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x2 over 28m)  kubelet          Node no-preload-658545 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node no-preload-658545 status is now: NodeReady
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-658545 event: Registered Node no-preload-658545 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-658545 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-658545 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-658545 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-658545 event: Registered Node no-preload-658545 in Controller
	
	
	==> dmesg <==
	[Oct 4 04:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.059273] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.051436] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.085772] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.686255] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.643710] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.625518] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.063207] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066634] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.196955] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.142936] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.309370] systemd-fstab-generator[699]: Ignoring "noauto" option for root device
	[ +16.031370] systemd-fstab-generator[1234]: Ignoring "noauto" option for root device
	[  +0.062665] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.162172] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[  +3.721915] kauditd_printk_skb: 97 callbacks suppressed
	[Oct 4 04:25] systemd-fstab-generator[1986]: Ignoring "noauto" option for root device
	[  +3.701123] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.637724] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] <==
	{"level":"info","ts":"2024-10-04T04:24:55.040986Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-04T04:24:55.044914Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"770d524238a76c54","local-member-id":"5f41dc21f7a6c607","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T04:24:55.044956Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T04:24:56.577341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-04T04:24:56.577432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-04T04:24:56.577463Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 received MsgPreVoteResp from 5f41dc21f7a6c607 at term 2"}
	{"level":"info","ts":"2024-10-04T04:24:56.577508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 became candidate at term 3"}
	{"level":"info","ts":"2024-10-04T04:24:56.577517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 received MsgVoteResp from 5f41dc21f7a6c607 at term 3"}
	{"level":"info","ts":"2024-10-04T04:24:56.577526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 became leader at term 3"}
	{"level":"info","ts":"2024-10-04T04:24:56.577533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5f41dc21f7a6c607 elected leader 5f41dc21f7a6c607 at term 3"}
	{"level":"info","ts":"2024-10-04T04:24:56.580820Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T04:24:56.581789Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:24:56.582663Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.54:2379"}
	{"level":"info","ts":"2024-10-04T04:24:56.582951Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T04:24:56.583599Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T04:24:56.580774Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"5f41dc21f7a6c607","local-member-attributes":"{Name:no-preload-658545 ClientURLs:[https://192.168.72.54:2379]}","request-path":"/0/members/5f41dc21f7a6c607/attributes","cluster-id":"770d524238a76c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T04:24:56.584437Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T04:24:56.584465Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T04:24:56.585202Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T04:34:56.642467Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":823}
	{"level":"info","ts":"2024-10-04T04:34:56.651982Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":823,"took":"8.971572ms","hash":2623449157,"current-db-size-bytes":2551808,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2551808,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-10-04T04:34:56.652073Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2623449157,"revision":823,"compact-revision":-1}
	{"level":"info","ts":"2024-10-04T04:39:56.648670Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1065}
	{"level":"info","ts":"2024-10-04T04:39:56.652187Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1065,"took":"3.197168ms","hash":1865415304,"current-db-size-bytes":2551808,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-04T04:39:56.652286Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1865415304,"revision":1065,"compact-revision":823}
	
	
	==> kernel <==
	 04:44:12 up 19 min,  0 users,  load average: 0.05, 0.07, 0.08
	Linux no-preload-658545 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1004 04:39:59.035978       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:39:59.035995       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1004 04:39:59.037057       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:39:59.037175       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 04:40:59.037470       1 handler_proxy.go:99] no RequestInfo found in the context
	W1004 04:40:59.037504       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:40:59.037787       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1004 04:40:59.037693       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1004 04:40:59.038927       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:40:59.038954       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 04:42:59.039745       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:42:59.039874       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1004 04:42:59.039962       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 04:42:59.040016       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1004 04:42:59.040996       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 04:42:59.041033       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] <==
	E1004 04:39:01.708594       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:39:02.247112       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:39:31.714681       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:39:32.254843       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:40:01.721149       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:40:02.261936       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:40:31.727209       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:40:32.269991       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 04:40:46.600288       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-658545"
	E1004 04:41:01.734109       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:41:02.279102       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 04:41:04.026783       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="261.435µs"
	I1004 04:41:16.023860       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="47.547µs"
	E1004 04:41:31.741812       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:41:32.287639       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:42:01.749958       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:42:02.296033       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:42:31.757945       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:42:32.304497       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:43:01.763955       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:43:02.311888       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:43:31.770166       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:43:32.319892       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 04:44:01.777327       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1004 04:44:02.329551       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 04:24:59.828842       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 04:24:59.856881       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.54"]
	E1004 04:24:59.857085       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 04:24:59.982333       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 04:24:59.982566       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 04:24:59.982670       1 server_linux.go:169] "Using iptables Proxier"
	I1004 04:25:00.000354       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 04:25:00.009747       1 server.go:483] "Version info" version="v1.31.1"
	I1004 04:25:00.009853       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 04:25:00.029155       1 config.go:328] "Starting node config controller"
	I1004 04:25:00.029583       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 04:25:00.031843       1 config.go:199] "Starting service config controller"
	I1004 04:25:00.051045       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 04:25:00.041867       1 config.go:105] "Starting endpoint slice config controller"
	I1004 04:25:00.053461       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 04:25:00.053708       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 04:25:00.132776       1 shared_informer.go:320] Caches are synced for node config
	I1004 04:25:00.152217       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] <==
	I1004 04:24:55.712204       1 serving.go:386] Generated self-signed cert in-memory
	W1004 04:24:57.979783       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 04:24:57.979999       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 04:24:57.980120       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 04:24:57.980157       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 04:24:58.046310       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1004 04:24:58.046583       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 04:24:58.055885       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1004 04:24:58.061540       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1004 04:24:58.061690       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 04:24:58.062313       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 04:24:58.163226       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 04 04:43:04 no-preload-658545 kubelet[1364]: E1004 04:43:04.011463    1364 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zsf86" podUID="434282d8-7a99-4a76-b5c3-a880cf78ec35"
	Oct 04 04:43:04 no-preload-658545 kubelet[1364]: E1004 04:43:04.241935    1364 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016984241597031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:43:04 no-preload-658545 kubelet[1364]: E1004 04:43:04.242047    1364 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016984241597031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:43:14 no-preload-658545 kubelet[1364]: E1004 04:43:14.243976    1364 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016994243625685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:43:14 no-preload-658545 kubelet[1364]: E1004 04:43:14.244339    1364 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728016994243625685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:43:19 no-preload-658545 kubelet[1364]: E1004 04:43:19.009100    1364 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zsf86" podUID="434282d8-7a99-4a76-b5c3-a880cf78ec35"
	Oct 04 04:43:24 no-preload-658545 kubelet[1364]: E1004 04:43:24.246952    1364 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017004246205789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:43:24 no-preload-658545 kubelet[1364]: E1004 04:43:24.247050    1364 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017004246205789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:43:32 no-preload-658545 kubelet[1364]: E1004 04:43:32.009625    1364 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zsf86" podUID="434282d8-7a99-4a76-b5c3-a880cf78ec35"
	Oct 04 04:43:34 no-preload-658545 kubelet[1364]: E1004 04:43:34.249336    1364 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017014248795634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:43:34 no-preload-658545 kubelet[1364]: E1004 04:43:34.249374    1364 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017014248795634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:43:44 no-preload-658545 kubelet[1364]: E1004 04:43:44.251585    1364 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017024250879221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:43:44 no-preload-658545 kubelet[1364]: E1004 04:43:44.251919    1364 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017024250879221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:43:47 no-preload-658545 kubelet[1364]: E1004 04:43:47.010044    1364 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zsf86" podUID="434282d8-7a99-4a76-b5c3-a880cf78ec35"
	Oct 04 04:43:54 no-preload-658545 kubelet[1364]: E1004 04:43:54.030443    1364 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 04 04:43:54 no-preload-658545 kubelet[1364]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 04 04:43:54 no-preload-658545 kubelet[1364]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 04:43:54 no-preload-658545 kubelet[1364]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 04:43:54 no-preload-658545 kubelet[1364]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 04:43:54 no-preload-658545 kubelet[1364]: E1004 04:43:54.254444    1364 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017034253933415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:43:54 no-preload-658545 kubelet[1364]: E1004 04:43:54.254470    1364 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017034253933415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:44:00 no-preload-658545 kubelet[1364]: E1004 04:44:00.010158    1364 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zsf86" podUID="434282d8-7a99-4a76-b5c3-a880cf78ec35"
	Oct 04 04:44:04 no-preload-658545 kubelet[1364]: E1004 04:44:04.256110    1364 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017044255606728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:44:04 no-preload-658545 kubelet[1364]: E1004 04:44:04.256473    1364 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017044255606728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 04 04:44:11 no-preload-658545 kubelet[1364]: E1004 04:44:11.010495    1364 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zsf86" podUID="434282d8-7a99-4a76-b5c3-a880cf78ec35"
	
	
	==> storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] <==
	I1004 04:25:30.358062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 04:25:30.373088       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 04:25:30.373189       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 04:25:30.382209       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 04:25:30.382665       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5b66a53-6e63-4dde-adfd-df3bb1be9ea0", APIVersion:"v1", ResourceVersion:"587", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-658545_5e71066c-469a-43ee-917a-9f4f186fd191 became leader
	I1004 04:25:30.382716       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-658545_5e71066c-469a-43ee-917a-9f4f186fd191!
	I1004 04:25:30.482901       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-658545_5e71066c-469a-43ee-917a-9f4f186fd191!
	
	
	==> storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] <==
	I1004 04:24:59.498625       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1004 04:25:29.503995       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-658545 -n no-preload-658545
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-658545 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-zsf86
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-658545 describe pod metrics-server-6867b74b74-zsf86
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-658545 describe pod metrics-server-6867b74b74-zsf86: exit status 1 (63.121121ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-zsf86" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-658545 describe pod metrics-server-6867b74b74-zsf86: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (344.61s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (178.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
	[identical connection-refused warning repeated 60 more times while polling for the kubernetes-dashboard pod]
E1004 04:42:08.994406   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
	[identical connection-refused warning repeated 5 more times while polling]
E1004 04:42:15.014688   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.146:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.146:8443: connect: connection refused
(15 further identical warnings omitted)
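The repeated "connection refused" errors above mean nothing was answering on 192.168.50.146:8443 while the helper kept polling. As a rough illustration only (not part of the test suite; the address is copied from the warnings), a few lines of Go can confirm that the apiserver endpoint is unreachable:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Endpoint taken from the warnings above; hard-coded purely for illustration.
		addr := "192.168.50.146:8443"
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// With kube-apiserver down this reports "connect: connection refused",
			// matching the errors logged by the test helper.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting TCP connections")
	}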
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-420062 -n old-k8s-version-420062
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-420062 -n old-k8s-version-420062: exit status 2 (235.860959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-420062" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-420062 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-420062 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.623µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-420062 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
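For context, the wait that times out here lists pods by label selector until one is Running or the 9m0s deadline passes. A minimal client-go sketch of that kind of loop follows; it is illustrative only, not the actual helpers_test.go code, and the kubeconfig path, namespace, and poll interval are assumptions:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path is a placeholder; the CI job points at its own minikube profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(9 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err != nil {
				// While the apiserver is stopped, this is the "connection refused" warning seen above.
				fmt.Println("WARNING: pod list returned:", err)
				time.Sleep(3 * time.Second)
				continue
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("dashboard pod running:", p.Name)
					return
				}
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("k8s-app=kubernetes-dashboard did not become Running: deadline exceeded")
	}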
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420062 -n old-k8s-version-420062
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420062 -n old-k8s-version-420062: exit status 2 (221.088493ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-420062 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-420062 logs -n 25: (1.606301778s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-658545                                   | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-934812            | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-934812                                  | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-617497             | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-617497                  | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-617497 --memory=2200 --alsologtostderr   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:16 UTC | 04 Oct 24 04:17 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-617497 image list                           | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	| delete  | -p newest-cni-617497                                   | newest-cni-617497            | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:17 UTC | 04 Oct 24 04:18 UTC |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-658545                  | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-658545                                   | no-preload-658545            | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC | 04 Oct 24 04:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-281471  | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC | 04 Oct 24 04:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-420062        | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-934812                 | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-934812                                  | embed-certs-934812           | jenkins | v1.34.0 | 04 Oct 24 04:19 UTC | 04 Oct 24 04:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC | 04 Oct 24 04:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-420062             | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC | 04 Oct 24 04:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-420062                              | old-k8s-version-420062       | jenkins | v1.34.0 | 04 Oct 24 04:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-281471       | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-281471 | jenkins | v1.34.0 | 04 Oct 24 04:21 UTC | 04 Oct 24 04:28 UTC |
	|         | default-k8s-diff-port-281471                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 04:21:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 04:21:23.276574   67541 out.go:345] Setting OutFile to fd 1 ...
	I1004 04:21:23.276701   67541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:21:23.276710   67541 out.go:358] Setting ErrFile to fd 2...
	I1004 04:21:23.276715   67541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 04:21:23.276893   67541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 04:21:23.277439   67541 out.go:352] Setting JSON to false
	I1004 04:21:23.278387   67541 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7428,"bootTime":1728008255,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 04:21:23.278482   67541 start.go:139] virtualization: kvm guest
	I1004 04:21:23.280571   67541 out.go:177] * [default-k8s-diff-port-281471] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 04:21:23.282033   67541 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 04:21:23.282063   67541 notify.go:220] Checking for updates...
	I1004 04:21:23.284454   67541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 04:21:23.285843   67541 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:21:23.287026   67541 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 04:21:23.288328   67541 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 04:21:23.289544   67541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 04:21:23.291321   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:21:23.291979   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:21:23.292059   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:21:23.306995   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I1004 04:21:23.307440   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:21:23.308080   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:21:23.308106   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:21:23.308442   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:21:23.308642   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:21:23.308893   67541 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 04:21:23.309208   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:21:23.309280   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:21:23.323807   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I1004 04:21:23.324281   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:21:23.324777   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:21:23.324797   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:21:23.325085   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:21:23.325248   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:21:23.359916   67541 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 04:21:23.361482   67541 start.go:297] selected driver: kvm2
	I1004 04:21:23.361504   67541 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:21:23.361657   67541 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 04:21:23.362533   67541 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:21:23.362621   67541 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 04:21:23.378088   67541 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 04:21:23.378515   67541 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:21:23.378547   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:21:23.378591   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:21:23.378627   67541 start.go:340] cluster config:
	{Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:21:23.378727   67541 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 04:21:23.380705   67541 out.go:177] * Starting "default-k8s-diff-port-281471" primary control-plane node in "default-k8s-diff-port-281471" cluster
	I1004 04:21:20.068102   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:23.140106   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:23.381986   67541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:21:23.382036   67541 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 04:21:23.382048   67541 cache.go:56] Caching tarball of preloaded images
	I1004 04:21:23.382125   67541 preload.go:172] Found /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 04:21:23.382135   67541 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1004 04:21:23.382254   67541 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/config.json ...
	I1004 04:21:23.382433   67541 start.go:360] acquireMachinesLock for default-k8s-diff-port-281471: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:21:29.220163   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:32.292105   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:38.372080   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:41.444091   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:47.524103   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:50.596091   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:56.676086   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:21:59.748055   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:05.828125   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:08.900042   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:14.980094   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:18.052114   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:24.132087   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:27.204139   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:33.284040   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:36.356076   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:42.436190   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:45.508075   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:51.588061   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:22:54.660042   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:00.740141   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:03.812099   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:09.892076   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:12.964133   66293 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.54:22: connect: no route to host
	I1004 04:23:15.968919   66755 start.go:364] duration metric: took 4m6.72532498s to acquireMachinesLock for "embed-certs-934812"
	I1004 04:23:15.968984   66755 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:15.968992   66755 fix.go:54] fixHost starting: 
	I1004 04:23:15.969309   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:15.969356   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:15.984739   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34447
	I1004 04:23:15.985214   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:15.985743   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:23:15.985769   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:15.986104   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:15.986289   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:15.986449   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:23:15.988237   66755 fix.go:112] recreateIfNeeded on embed-certs-934812: state=Stopped err=<nil>
	I1004 04:23:15.988263   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	W1004 04:23:15.988415   66755 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:15.990473   66755 out.go:177] * Restarting existing kvm2 VM for "embed-certs-934812" ...
	I1004 04:23:15.965929   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:15.965974   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:23:15.966321   66293 buildroot.go:166] provisioning hostname "no-preload-658545"
	I1004 04:23:15.966348   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:23:15.966530   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:23:15.968760   66293 machine.go:96] duration metric: took 4m37.423316886s to provisionDockerMachine
	I1004 04:23:15.968806   66293 fix.go:56] duration metric: took 4m37.446149084s for fixHost
	I1004 04:23:15.968814   66293 start.go:83] releasing machines lock for "no-preload-658545", held for 4m37.446179902s
	W1004 04:23:15.968836   66293 start.go:714] error starting host: provision: host is not running
	W1004 04:23:15.968935   66293 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1004 04:23:15.968946   66293 start.go:729] Will try again in 5 seconds ...
	I1004 04:23:15.991914   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Start
	I1004 04:23:15.992106   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring networks are active...
	I1004 04:23:15.992995   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring network default is active
	I1004 04:23:15.993392   66755 main.go:141] libmachine: (embed-certs-934812) Ensuring network mk-embed-certs-934812 is active
	I1004 04:23:15.993728   66755 main.go:141] libmachine: (embed-certs-934812) Getting domain xml...
	I1004 04:23:15.994410   66755 main.go:141] libmachine: (embed-certs-934812) Creating domain...
	I1004 04:23:17.232262   66755 main.go:141] libmachine: (embed-certs-934812) Waiting to get IP...
	I1004 04:23:17.233339   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.233793   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.233879   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.233797   67957 retry.go:31] will retry after 221.075745ms: waiting for machine to come up
	I1004 04:23:17.456413   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.456917   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.456941   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.456869   67957 retry.go:31] will retry after 354.386237ms: waiting for machine to come up
	I1004 04:23:17.812523   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:17.812949   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:17.812973   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:17.812905   67957 retry.go:31] will retry after 338.999517ms: waiting for machine to come up
	I1004 04:23:18.153589   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:18.154029   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:18.154056   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:18.153987   67957 retry.go:31] will retry after 555.533205ms: waiting for machine to come up
	I1004 04:23:18.710680   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:18.711155   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:18.711181   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:18.711104   67957 retry.go:31] will retry after 733.812197ms: waiting for machine to come up
	I1004 04:23:20.970507   66293 start.go:360] acquireMachinesLock for no-preload-658545: {Name:mkd1b6a549425547fb7f1523c6881d37bca41fc3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 04:23:19.447202   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:19.447644   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:19.447671   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:19.447600   67957 retry.go:31] will retry after 575.303848ms: waiting for machine to come up
	I1004 04:23:20.024465   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:20.024788   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:20.024819   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:20.024735   67957 retry.go:31] will retry after 894.593683ms: waiting for machine to come up
	I1004 04:23:20.920880   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:20.921499   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:20.921522   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:20.921480   67957 retry.go:31] will retry after 924.978895ms: waiting for machine to come up
	I1004 04:23:21.848064   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:21.848498   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:21.848619   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:21.848550   67957 retry.go:31] will retry after 1.554806984s: waiting for machine to come up
	I1004 04:23:23.404569   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:23.404936   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:23.404964   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:23.404884   67957 retry.go:31] will retry after 1.700496318s: waiting for machine to come up
	I1004 04:23:25.106988   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:25.107410   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:25.107441   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:25.107351   67957 retry.go:31] will retry after 1.913555474s: waiting for machine to come up
	I1004 04:23:27.022672   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:27.023134   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:27.023161   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:27.023096   67957 retry.go:31] will retry after 3.208946613s: waiting for machine to come up
	I1004 04:23:30.235462   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:30.235910   66755 main.go:141] libmachine: (embed-certs-934812) DBG | unable to find current IP address of domain embed-certs-934812 in network mk-embed-certs-934812
	I1004 04:23:30.235942   66755 main.go:141] libmachine: (embed-certs-934812) DBG | I1004 04:23:30.235868   67957 retry.go:31] will retry after 3.125545279s: waiting for machine to come up
	I1004 04:23:33.364563   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.365007   66755 main.go:141] libmachine: (embed-certs-934812) Found IP for machine: 192.168.61.74
	I1004 04:23:33.365031   66755 main.go:141] libmachine: (embed-certs-934812) Reserving static IP address...
	I1004 04:23:33.365047   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has current primary IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.365595   66755 main.go:141] libmachine: (embed-certs-934812) Reserved static IP address: 192.168.61.74
	I1004 04:23:33.365628   66755 main.go:141] libmachine: (embed-certs-934812) Waiting for SSH to be available...
	I1004 04:23:33.365648   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "embed-certs-934812", mac: "52:54:00:25:fb:50", ip: "192.168.61.74"} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.365667   66755 main.go:141] libmachine: (embed-certs-934812) DBG | skip adding static IP to network mk-embed-certs-934812 - found existing host DHCP lease matching {name: "embed-certs-934812", mac: "52:54:00:25:fb:50", ip: "192.168.61.74"}
	I1004 04:23:33.365682   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Getting to WaitForSSH function...
	I1004 04:23:33.367835   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.368155   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.368185   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.368297   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Using SSH client type: external
	I1004 04:23:33.368322   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa (-rw-------)
	I1004 04:23:33.368359   66755 main.go:141] libmachine: (embed-certs-934812) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.74 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:23:33.368369   66755 main.go:141] libmachine: (embed-certs-934812) DBG | About to run SSH command:
	I1004 04:23:33.368377   66755 main.go:141] libmachine: (embed-certs-934812) DBG | exit 0
	I1004 04:23:33.496067   66755 main.go:141] libmachine: (embed-certs-934812) DBG | SSH cmd err, output: <nil>: 
	I1004 04:23:33.496559   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetConfigRaw
	I1004 04:23:33.497310   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:33.500858   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.501360   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.501403   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.501750   66755 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/config.json ...
	I1004 04:23:33.502058   66755 machine.go:93] provisionDockerMachine start ...
	I1004 04:23:33.502084   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:33.502303   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.505899   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.506442   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.506475   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.506686   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.506947   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.507165   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.507324   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.507541   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.507744   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.507757   66755 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:23:33.624518   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:23:33.624547   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.624795   66755 buildroot.go:166] provisioning hostname "embed-certs-934812"
	I1004 04:23:33.624826   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.625021   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.627597   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.627916   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.627948   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.628115   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.628312   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.628444   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.628608   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.628785   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.629023   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.629040   66755 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-934812 && echo "embed-certs-934812" | sudo tee /etc/hostname
	I1004 04:23:33.758642   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-934812
	
	I1004 04:23:33.758681   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.761325   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.761654   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.761696   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.761849   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:33.762034   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.762164   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:33.762297   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:33.762426   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:33.762636   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:33.762652   66755 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-934812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-934812/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-934812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:23:33.889571   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:33.889601   66755 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:23:33.889642   66755 buildroot.go:174] setting up certificates
	I1004 04:23:33.889654   66755 provision.go:84] configureAuth start
	I1004 04:23:33.889681   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetMachineName
	I1004 04:23:33.889992   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:33.892657   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.893063   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.893087   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.893310   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:33.895770   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.896126   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:33.896162   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:33.896328   66755 provision.go:143] copyHostCerts
	I1004 04:23:33.896397   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:23:33.896408   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:23:33.896472   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:23:33.896565   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:23:33.896573   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:23:33.896595   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:23:33.896652   66755 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:23:33.896659   66755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:23:33.896678   66755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:23:33.896724   66755 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.embed-certs-934812 san=[127.0.0.1 192.168.61.74 embed-certs-934812 localhost minikube]
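The provisioner signs a server certificate with the minikube CA using the org and SAN list shown in the line above. minikube does this in Go; purely as an illustration (not minikube's actual code path), an equivalent certificate could be produced with plain openssl. The CA paths and SAN entries below are copied from the log line, everything else is a sketch:

    # Illustrative sketch only; not how minikube itself issues the cert.
    MK=/home/jenkins/minikube-integration/19546-9647/.minikube
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.embed-certs-934812/CN=embed-certs-934812"
    openssl x509 -req -in server.csr -CA "$MK/certs/ca.pem" -CAkey "$MK/certs/ca-key.pem" \
      -CAcreateserial -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.61.74,DNS:embed-certs-934812,DNS:localhost,DNS:minikube')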
	I1004 04:23:33.997867   66755 provision.go:177] copyRemoteCerts
	I1004 04:23:33.997923   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:23:33.997950   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.001050   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.001422   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.001461   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.001733   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.001961   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.002125   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.002246   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.090823   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:23:34.116934   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1004 04:23:34.669084   67282 start.go:364] duration metric: took 2m46.052475725s to acquireMachinesLock for "old-k8s-version-420062"
	I1004 04:23:34.669158   67282 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:34.669168   67282 fix.go:54] fixHost starting: 
	I1004 04:23:34.669584   67282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:34.669640   67282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:34.686790   67282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I1004 04:23:34.687312   67282 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:34.687829   67282 main.go:141] libmachine: Using API Version  1
	I1004 04:23:34.687857   67282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:34.688238   67282 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:34.688415   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:34.688579   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetState
	I1004 04:23:34.690288   67282 fix.go:112] recreateIfNeeded on old-k8s-version-420062: state=Stopped err=<nil>
	I1004 04:23:34.690326   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	W1004 04:23:34.690467   67282 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:34.692283   67282 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-420062" ...
	I1004 04:23:34.143763   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 04:23:34.168897   66755 provision.go:87] duration metric: took 279.227966ms to configureAuth
	I1004 04:23:34.168929   66755 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:23:34.169096   66755 config.go:182] Loaded profile config "embed-certs-934812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:23:34.169168   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.171638   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.171952   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.171977   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.172178   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.172349   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.172503   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.172594   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.172717   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:34.172924   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:34.172943   66755 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:23:34.411661   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:23:34.411690   66755 machine.go:96] duration metric: took 909.61315ms to provisionDockerMachine
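The SSH command just above writes the CRI-O options to /etc/sysconfig/crio.minikube and restarts crio. A minimal check from a shell on the VM, assuming the ISO's crio unit sources that sysconfig file (a sketch, not part of the test run):

    # Confirm the drop-in exists and crio came back up after the restart.
    cat /etc/sysconfig/crio.minikube
    sudo systemctl is-active crio
    sudo systemctl cat crio | grep -n EnvironmentFile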
	I1004 04:23:34.411703   66755 start.go:293] postStartSetup for "embed-certs-934812" (driver="kvm2")
	I1004 04:23:34.411716   66755 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:23:34.411734   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.412070   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:23:34.412099   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.415246   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.415583   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.415643   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.415802   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.415997   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.416170   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.416322   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.507385   66755 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:23:34.511963   66755 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:23:34.511990   66755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:23:34.512064   66755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:23:34.512152   66755 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:23:34.512270   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:23:34.522375   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:34.547860   66755 start.go:296] duration metric: took 136.143527ms for postStartSetup
	I1004 04:23:34.547904   66755 fix.go:56] duration metric: took 18.578910472s for fixHost
	I1004 04:23:34.547931   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.550715   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.551031   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.551067   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.551194   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.551391   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.551568   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.551724   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.551903   66755 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:34.552055   66755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.74 22 <nil> <nil>}
	I1004 04:23:34.552064   66755 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:23:34.668944   66755 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015814.641353752
	
	I1004 04:23:34.668966   66755 fix.go:216] guest clock: 1728015814.641353752
	I1004 04:23:34.668974   66755 fix.go:229] Guest: 2024-10-04 04:23:34.641353752 +0000 UTC Remote: 2024-10-04 04:23:34.547909289 +0000 UTC m=+265.449211021 (delta=93.444463ms)
	I1004 04:23:34.668993   66755 fix.go:200] guest clock delta is within tolerance: 93.444463ms
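The skew check above runs "date +%s.%N" on the guest over SSH and subtracts it from the controller's wall clock; here the delta is 93.444463ms, well inside tolerance. Roughly the same measurement by hand, as a sketch (the key path is the one this log uses, and the result also absorbs SSH round-trip latency):

    # Approximate guest-vs-host clock skew; includes SSH round-trip time.
    KEY=/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa
    guest=$(ssh -i "$KEY" docker@192.168.61.74 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN{printf "delta: %.6f s\n", g-h}'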
	I1004 04:23:34.668999   66755 start.go:83] releasing machines lock for "embed-certs-934812", held for 18.70003051s
	I1004 04:23:34.669024   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.669299   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:34.672346   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.672757   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.672796   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.672966   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673609   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673816   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:23:34.673940   66755 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:23:34.673982   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.674020   66755 ssh_runner.go:195] Run: cat /version.json
	I1004 04:23:34.674043   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:23:34.676934   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677085   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677379   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.677406   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677449   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:34.677480   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:34.677560   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.677677   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:23:34.677758   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.677811   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:23:34.677873   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.677928   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:23:34.677979   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.678022   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:23:34.761509   66755 ssh_runner.go:195] Run: systemctl --version
	I1004 04:23:34.784487   66755 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:23:34.934037   66755 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:23:34.942569   66755 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:23:34.942642   66755 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:23:34.960164   66755 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:23:34.960197   66755 start.go:495] detecting cgroup driver to use...
	I1004 04:23:34.960276   66755 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:23:34.979195   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:23:34.994660   66755 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:23:34.994747   66755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:23:35.011209   66755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:23:35.031746   66755 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:23:35.146164   66755 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:23:35.287092   66755 docker.go:233] disabling docker service ...
	I1004 04:23:35.287167   66755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:23:35.308007   66755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:23:35.323235   66755 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:23:35.473583   66755 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:23:35.610098   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:23:35.624276   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:23:35.643810   66755 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:23:35.643873   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.655804   66755 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:23:35.655875   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.668260   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.679770   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.692649   66755 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:23:35.704364   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.715539   66755 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:35.739272   66755 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
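Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pointing at the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, conmon in the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 in default_sysctls. Written out as a fresh drop-in it would look roughly like this (illustrative only; minikube edits the existing file in place, and the section headers follow CRI-O's documented config layout):

    # Equivalent drop-in to what the in-place edits produce (sketch).
    printf '%s\n' \
      '[crio.image]' \
      'pause_image = "registry.k8s.io/pause:3.10"' \
      '' \
      '[crio.runtime]' \
      'cgroup_manager = "cgroupfs"' \
      'conmon_cgroup = "pod"' \
      'default_sysctls = [' \
      '  "net.ipv4.ip_unprivileged_port_start=0",' \
      ']' | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
    sudo systemctl restart crio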
	I1004 04:23:35.754538   66755 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:23:35.766476   66755 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:23:35.766566   66755 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:23:35.781677   66755 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
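The failed sysctl read above just means br_netfilter was not loaded yet; loading the module creates /proc/sys/net/bridge/*, and the echo enables IPv4 forwarding. A quick manual verification, as a sketch:

    # After modprobe, the bridge sysctls exist; 1 means bridged traffic hits iptables.
    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables    # set with: sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
    sysctl net.ipv4.ip_forward                   # expected to be 1 after the echo above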
	I1004 04:23:35.792640   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:35.910787   66755 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:23:36.015877   66755 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:23:36.015948   66755 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:23:36.021573   66755 start.go:563] Will wait 60s for crictl version
	I1004 04:23:36.021642   66755 ssh_runner.go:195] Run: which crictl
	I1004 04:23:36.025605   66755 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:23:36.064644   66755 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
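Once /var/run/crio/crio.sock is up, crictl talks to CRI-O directly, which is where the version block above comes from. The same queries by hand, as a sketch:

    # Query the CRI runtime over its socket (the same endpoint the test waits for).
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info | head -n 20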
	I1004 04:23:36.064714   66755 ssh_runner.go:195] Run: crio --version
	I1004 04:23:36.094751   66755 ssh_runner.go:195] Run: crio --version
	I1004 04:23:36.127213   66755 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:23:34.693590   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .Start
	I1004 04:23:34.693792   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring networks are active...
	I1004 04:23:34.694582   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring network default is active
	I1004 04:23:34.694917   67282 main.go:141] libmachine: (old-k8s-version-420062) Ensuring network mk-old-k8s-version-420062 is active
	I1004 04:23:34.695322   67282 main.go:141] libmachine: (old-k8s-version-420062) Getting domain xml...
	I1004 04:23:34.696052   67282 main.go:141] libmachine: (old-k8s-version-420062) Creating domain...
	I1004 04:23:35.995511   67282 main.go:141] libmachine: (old-k8s-version-420062) Waiting to get IP...
	I1004 04:23:35.996465   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:35.996962   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:35.997031   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:35.996923   68093 retry.go:31] will retry after 296.620059ms: waiting for machine to come up
	I1004 04:23:36.295737   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:36.296226   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:36.296257   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:36.296182   68093 retry.go:31] will retry after 311.736827ms: waiting for machine to come up
	I1004 04:23:36.610158   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:36.610804   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:36.610829   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:36.610759   68093 retry.go:31] will retry after 440.646496ms: waiting for machine to come up
	I1004 04:23:37.053487   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:37.053956   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:37.053981   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:37.053923   68093 retry.go:31] will retry after 550.190101ms: waiting for machine to come up
	I1004 04:23:37.605404   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:37.605775   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:37.605815   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:37.605743   68093 retry.go:31] will retry after 721.648529ms: waiting for machine to come up
	I1004 04:23:38.328819   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:38.329323   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:38.329362   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:38.329281   68093 retry.go:31] will retry after 825.234448ms: waiting for machine to come up
	I1004 04:23:36.128549   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetIP
	I1004 04:23:36.131439   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:36.131827   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:23:36.131856   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:23:36.132054   66755 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1004 04:23:36.136650   66755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:23:36.149563   66755 kubeadm.go:883] updating cluster {Name:embed-certs-934812 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:23:36.149691   66755 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:23:36.149738   66755 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:36.188235   66755 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:23:36.188316   66755 ssh_runner.go:195] Run: which lz4
	I1004 04:23:36.192619   66755 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:23:36.196876   66755 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:23:36.196909   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 04:23:37.711672   66755 crio.go:462] duration metric: took 1.519102092s to copy over tarball
	I1004 04:23:37.711752   66755 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:23:39.155736   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:39.156199   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:39.156229   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:39.156150   68093 retry.go:31] will retry after 970.793402ms: waiting for machine to come up
	I1004 04:23:40.128963   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:40.129454   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:40.129507   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:40.129419   68093 retry.go:31] will retry after 1.460395601s: waiting for machine to come up
	I1004 04:23:41.592145   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:41.592653   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:41.592677   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:41.592600   68093 retry.go:31] will retry after 1.397092356s: waiting for machine to come up
	I1004 04:23:42.992176   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:42.992670   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:42.992724   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:42.992663   68093 retry.go:31] will retry after 1.560294099s: waiting for machine to come up
	I1004 04:23:39.864408   66755 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.152629063s)
	I1004 04:23:39.864437   66755 crio.go:469] duration metric: took 2.152732931s to extract the tarball
	I1004 04:23:39.864446   66755 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:23:39.902496   66755 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:39.956348   66755 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:23:39.956373   66755 cache_images.go:84] Images are preloaded, skipping loading
	I1004 04:23:39.956381   66755 kubeadm.go:934] updating node { 192.168.61.74 8443 v1.31.1 crio true true} ...
	I1004 04:23:39.956509   66755 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-934812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
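The [Service] fragment above is the ExecStart override that later lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 317-byte scp further down). To see the merged unit on the guest, a sketch:

    # Show the kubelet unit with its drop-ins and the effective ExecStart line.
    sudo systemctl cat kubelet
    systemctl show kubelet --property=ExecStart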
	I1004 04:23:39.956572   66755 ssh_runner.go:195] Run: crio config
	I1004 04:23:40.014396   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:23:40.014423   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:23:40.014436   66755 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:23:40.014470   66755 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.74 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-934812 NodeName:embed-certs-934812 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:23:40.014642   66755 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-934812"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
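The rendered kubeadm.yaml above combines InitConfiguration/ClusterConfiguration (kubeadm v1beta3) with KubeletConfiguration and KubeProxyConfiguration documents. It can be sanity-checked with the same kubeadm binary before the init phases run; "kubeadm config validate" exists in recent releases (a sketch, not something this test does):

    # Validate the generated config with the bundled kubeadm binary (sketch).
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml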
	I1004 04:23:40.014728   66755 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:23:40.025328   66755 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:23:40.025441   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:23:40.035733   66755 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1004 04:23:40.057427   66755 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:23:40.078636   66755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1004 04:23:40.100583   66755 ssh_runner.go:195] Run: grep 192.168.61.74	control-plane.minikube.internal$ /etc/hosts
	I1004 04:23:40.104780   66755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.74	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:23:40.118484   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:40.245425   66755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:23:40.268739   66755 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812 for IP: 192.168.61.74
	I1004 04:23:40.268764   66755 certs.go:194] generating shared ca certs ...
	I1004 04:23:40.268792   66755 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:23:40.268962   66755 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:23:40.269022   66755 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:23:40.269035   66755 certs.go:256] generating profile certs ...
	I1004 04:23:40.269145   66755 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/client.key
	I1004 04:23:40.269226   66755 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.key.0181efa9
	I1004 04:23:40.269290   66755 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.key
	I1004 04:23:40.269436   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:23:40.269483   66755 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:23:40.269497   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:23:40.269535   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:23:40.269575   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:23:40.269607   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:23:40.269658   66755 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:40.270269   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:23:40.316579   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:23:40.352928   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:23:40.383124   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:23:40.410211   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1004 04:23:40.442388   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 04:23:40.473580   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:23:40.501589   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/embed-certs-934812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:23:40.527299   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:23:40.551994   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:23:40.576644   66755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:23:40.601518   66755 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:23:40.620092   66755 ssh_runner.go:195] Run: openssl version
	I1004 04:23:40.626451   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:23:40.637754   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.642413   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.642472   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:23:40.648449   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:23:40.659371   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:23:40.670276   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.674793   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.674844   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:23:40.680550   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:23:40.691439   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:23:40.702237   66755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.706876   66755 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.706937   66755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:23:40.712970   66755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
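The *.0 links created above follow OpenSSL's subject-hash convention: /etc/ssl/certs/<subject-hash>.0 must point at the CA so the default verify path can find it (b5213941 is the hash of the minikube CA here). Reproducing and checking one of those links, as a sketch:

    # Recreate the subject-hash link for the minikube CA and confirm lookup works.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem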
	I1004 04:23:40.724505   66755 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:23:40.729486   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:23:40.735720   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:23:40.741680   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:23:40.747975   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:23:40.754056   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:23:40.760235   66755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
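Each -checkend 86400 call above succeeds only if the certificate is still valid 24 hours from now. The same sweep over the files checked above, done in one loop, as a sketch:

    # 0 = valid for at least another day; non-zero = expiring or expired.
    for c in apiserver-etcd-client.crt apiserver-kubelet-client.crt \
             etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt \
             front-proxy-client.crt; do
      printf '%-40s ' "$c"
      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c" >/dev/null \
        && echo ok || echo 'expires within 24h'
    done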
	I1004 04:23:40.766463   66755 kubeadm.go:392] StartCluster: {Name:embed-certs-934812 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:embed-certs-934812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:23:40.766576   66755 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:23:40.766635   66755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:23:40.805927   66755 cri.go:89] found id: ""
	I1004 04:23:40.805995   66755 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:23:40.816693   66755 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:23:40.816717   66755 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:23:40.816770   66755 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:23:40.827024   66755 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:23:40.828056   66755 kubeconfig.go:125] found "embed-certs-934812" server: "https://192.168.61.74:8443"
	I1004 04:23:40.830076   66755 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:23:40.840637   66755 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.74
	I1004 04:23:40.840673   66755 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:23:40.840686   66755 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:23:40.840741   66755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:23:40.877659   66755 cri.go:89] found id: ""
	I1004 04:23:40.877737   66755 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:23:40.894712   66755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:23:40.904202   66755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:23:40.904224   66755 kubeadm.go:157] found existing configuration files:
	
	I1004 04:23:40.904290   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:23:40.913941   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:23:40.914003   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:23:40.924730   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:23:40.934706   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:23:40.934784   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:23:40.945008   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:23:40.954864   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:23:40.954949   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:23:40.965357   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:23:40.975380   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:23:40.975459   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:23:40.986157   66755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:23:41.001260   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:41.129150   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:41.839910   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:42.059079   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:42.132717   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
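
The five ssh_runner calls above re-run individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full init. Below is a minimal, locally-runnable sketch of that sequence; `run` is a hypothetical stand-in for minikube's ssh_runner and simply shells out on the local host, and the PATH/config values are copied from this run.

    // Sketch only: replay the kubeadm init phases the log shows, in order.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // run is a stand-in for minikube's ssh_runner; here it executes locally.
    func run(cmd string) error {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("Run: %s\n%s", cmd, out)
        return err
    }

    func main() {
        const env = `sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH"`
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := fmt.Sprintf("%s kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml", env, p)
            if err := run(cmd); err != nil {
                panic(err)
            }
        }
    }
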
	I1004 04:23:42.204227   66755 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:23:42.204389   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:42.704572   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.205099   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.704555   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:23:43.720983   66755 api_server.go:72] duration metric: took 1.516755506s to wait for apiserver process to appear ...
	I1004 04:23:43.721020   66755 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:23:43.721043   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.578729   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:23:46.578764   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:23:46.578780   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.611578   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:23:46.611609   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:23:46.721894   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:46.728611   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:46.728649   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:47.221889   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:47.229348   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:47.229382   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:47.721971   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:47.741433   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:23:47.741460   66755 api_server.go:103] status: https://192.168.61.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:23:48.222154   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:23:48.226802   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 200:
	ok
	I1004 04:23:48.233611   66755 api_server.go:141] control plane version: v1.31.1
	I1004 04:23:48.233645   66755 api_server.go:131] duration metric: took 4.512616682s to wait for apiserver health ...
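
The health wait above treats the early 403 ("system:anonymous" cannot get /healthz) and 500 ("[-]poststarthook/... failed") responses as "not ready yet" and keeps polling until the endpoint answers 200 "ok". A minimal sketch of that loop with only the standard library, using the endpoint and roughly the 500ms cadence seen in this run (the helper name and timeout are assumptions, not minikube's api_server.go):

    // Sketch: poll the apiserver /healthz anonymously until it returns 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            // anonymous probe over TLS; the serving cert is not verified here
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver answered "ok"
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.61.74:8443/healthz", 4*time.Minute); err != nil {
            panic(err)
        }
    }
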
	I1004 04:23:48.233655   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:23:48.233662   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:23:48.235421   66755 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:23:44.555619   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:44.556128   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:44.556154   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:44.556061   68093 retry.go:31] will retry after 2.564674777s: waiting for machine to come up
	I1004 04:23:47.123819   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:47.124235   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:47.124263   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:47.124181   68093 retry.go:31] will retry after 2.408805702s: waiting for machine to come up
	I1004 04:23:48.236675   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:23:48.248304   66755 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
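
Configuring the bridge CNI amounts to creating /etc/cni/net.d and dropping in the 1-k8s.conflist shown above (496 bytes in this run). The sketch below only illustrates the mechanics; the conflist payload is generated by minikube and is deliberately replaced by a placeholder, not reproduced.

    // Sketch: install a bridge CNI conflist; payload below is a placeholder.
    package main

    import (
        "os"
        "path/filepath"
    )

    // NOT the real file contents minikube writes.
    const conflistJSON = `{"cniVersion":"...","name":"bridge","plugins":["...generated by minikube..."]}`

    func main() {
        dir := "/etc/cni/net.d"
        if err := os.MkdirAll(dir, 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflistJSON), 0o644); err != nil {
            panic(err)
        }
    }
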
	I1004 04:23:48.273584   66755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:23:48.288132   66755 system_pods.go:59] 8 kube-system pods found
	I1004 04:23:48.288174   66755 system_pods.go:61] "coredns-7c65d6cfc9-z7pqn" [f206a8bf-5c18-49f2-9fae-a48a38d608a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:23:48.288208   66755 system_pods.go:61] "etcd-embed-certs-934812" [07a8f2db-6d47-469b-b0e4-749d1e106522] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:23:48.288218   66755 system_pods.go:61] "kube-apiserver-embed-certs-934812" [f36bc69a-a04e-40c2-8f78-a983ddbf28aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:23:48.288227   66755 system_pods.go:61] "kube-controller-manager-embed-certs-934812" [06d73118-fa31-4c98-b1e8-099611718b19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:23:48.288232   66755 system_pods.go:61] "kube-proxy-9qpgb" [6d833f16-4b8e-4409-99b6-214babe699c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 04:23:48.288238   66755 system_pods.go:61] "kube-scheduler-embed-certs-934812" [d076a245-49b6-4d8b-949a-2b559cd1d4d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:23:48.288243   66755 system_pods.go:61] "metrics-server-6867b74b74-d5b6b" [f4ec5d83-22a7-49e5-97e9-3519a29484fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:23:48.288250   66755 system_pods.go:61] "storage-provisioner" [2e76a95b-d6e2-4c1d-b954-3da8c2670a4a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 04:23:48.288259   66755 system_pods.go:74] duration metric: took 14.644463ms to wait for pod list to return data ...
	I1004 04:23:48.288265   66755 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:23:48.293121   66755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:23:48.293153   66755 node_conditions.go:123] node cpu capacity is 2
	I1004 04:23:48.293166   66755 node_conditions.go:105] duration metric: took 4.895489ms to run NodePressure ...
	I1004 04:23:48.293184   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:23:48.633398   66755 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:23:48.639243   66755 kubeadm.go:739] kubelet initialised
	I1004 04:23:48.639282   66755 kubeadm.go:740] duration metric: took 5.842777ms waiting for restarted kubelet to initialise ...
	I1004 04:23:48.639293   66755 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:23:48.650460   66755 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace to be "Ready" ...
	I1004 04:23:49.535979   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:49.536361   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | unable to find current IP address of domain old-k8s-version-420062 in network mk-old-k8s-version-420062
	I1004 04:23:49.536388   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | I1004 04:23:49.536332   68093 retry.go:31] will retry after 4.242056709s: waiting for machine to come up
	I1004 04:23:50.657094   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:52.657717   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"False"
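
The pod_ready.go lines above repeatedly report `"Ready":"False"` for the coredns pod while minikube waits up to 4m0s for its Ready condition. A rough equivalent of that check written against plain client-go (not minikube's pod_ready helpers); the kubeconfig path is illustrative, and the pod name/namespace are taken from this run:

    // Sketch: wait for a pod's Ready condition to become True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-z7pqn", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        panic("pod never became Ready within 4m0s")
    }
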
	I1004 04:23:55.089234   67541 start.go:364] duration metric: took 2m31.706739813s to acquireMachinesLock for "default-k8s-diff-port-281471"
	I1004 04:23:55.089300   67541 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:23:55.089311   67541 fix.go:54] fixHost starting: 
	I1004 04:23:55.089673   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:23:55.089718   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:23:55.110154   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
	I1004 04:23:55.110566   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:23:55.111001   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:23:55.111025   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:23:55.111417   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:23:55.111627   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:23:55.111794   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:23:55.113328   67541 fix.go:112] recreateIfNeeded on default-k8s-diff-port-281471: state=Stopped err=<nil>
	I1004 04:23:55.113356   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	W1004 04:23:55.113537   67541 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:23:55.115190   67541 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-281471" ...
	I1004 04:23:53.783128   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.783631   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has current primary IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.783669   67282 main.go:141] libmachine: (old-k8s-version-420062) Found IP for machine: 192.168.50.146
	I1004 04:23:53.783684   67282 main.go:141] libmachine: (old-k8s-version-420062) Reserving static IP address...
	I1004 04:23:53.784173   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "old-k8s-version-420062", mac: "52:54:00:fb:e4:4f", ip: "192.168.50.146"} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.784206   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | skip adding static IP to network mk-old-k8s-version-420062 - found existing host DHCP lease matching {name: "old-k8s-version-420062", mac: "52:54:00:fb:e4:4f", ip: "192.168.50.146"}
	I1004 04:23:53.784222   67282 main.go:141] libmachine: (old-k8s-version-420062) Reserved static IP address: 192.168.50.146
	I1004 04:23:53.784238   67282 main.go:141] libmachine: (old-k8s-version-420062) Waiting for SSH to be available...
	I1004 04:23:53.784250   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Getting to WaitForSSH function...
	I1004 04:23:53.786551   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.786985   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.787016   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.787207   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using SSH client type: external
	I1004 04:23:53.787244   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa (-rw-------)
	I1004 04:23:53.787285   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:23:53.787301   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | About to run SSH command:
	I1004 04:23:53.787315   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | exit 0
	I1004 04:23:53.916121   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | SSH cmd err, output: <nil>: 
	I1004 04:23:53.916487   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetConfigRaw
	I1004 04:23:53.917200   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:53.919846   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.920295   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.920323   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.920641   67282 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/config.json ...
	I1004 04:23:53.920902   67282 machine.go:93] provisionDockerMachine start ...
	I1004 04:23:53.920930   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:53.921137   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:53.923647   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.924000   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:53.924039   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:53.924198   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:53.924375   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:53.924508   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:53.924659   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:53.924796   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:53.925024   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:53.925036   67282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:23:54.044565   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:23:54.044595   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.044820   67282 buildroot.go:166] provisioning hostname "old-k8s-version-420062"
	I1004 04:23:54.044837   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.045006   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.047682   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.048032   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.048060   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.048186   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.048376   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.048525   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.048694   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.048853   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.049077   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.049098   67282 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-420062 && echo "old-k8s-version-420062" | sudo tee /etc/hostname
	I1004 04:23:54.183772   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-420062
	
	I1004 04:23:54.183835   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.186969   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.187333   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.187368   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.187754   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.188000   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.188177   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.188334   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.188559   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.188778   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.188803   67282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-420062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-420062/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-420062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:23:54.313827   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:23:54.313852   67282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:23:54.313896   67282 buildroot.go:174] setting up certificates
	I1004 04:23:54.313913   67282 provision.go:84] configureAuth start
	I1004 04:23:54.313925   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetMachineName
	I1004 04:23:54.314208   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:54.317028   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.317378   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.317408   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.317549   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.320292   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.320690   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.320718   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.320874   67282 provision.go:143] copyHostCerts
	I1004 04:23:54.320945   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:23:54.320957   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:23:54.321020   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:23:54.321144   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:23:54.321157   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:23:54.321184   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:23:54.321269   67282 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:23:54.321279   67282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:23:54.321306   67282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:23:54.321378   67282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-420062 san=[127.0.0.1 192.168.50.146 localhost minikube old-k8s-version-420062]
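
The "generating server cert" step signs a machine server certificate with the minikube CA, listing the SANs the log prints (127.0.0.1, 192.168.50.146, localhost, minikube, old-k8s-version-420062). A self-contained sketch with crypto/x509, not minikube's provision.go; the CA key pair here is created on the fly purely so the example compiles and runs on its own, and error handling is elided for brevity.

    // Sketch: sign a server cert with IP and DNS SANs, PEM-encode the result.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048) // throwaway CA for the sketch
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-420062"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.146")},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-420062"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        _ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644)
        _ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0o600)
    }
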
	I1004 04:23:54.395370   67282 provision.go:177] copyRemoteCerts
	I1004 04:23:54.395422   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:23:54.395452   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.398647   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.399153   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.399194   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.399392   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.399582   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.399852   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.399991   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:54.491055   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:23:54.523206   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1004 04:23:54.549843   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:23:54.580403   67282 provision.go:87] duration metric: took 266.475364ms to configureAuth
	I1004 04:23:54.580438   67282 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:23:54.580645   67282 config.go:182] Loaded profile config "old-k8s-version-420062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1004 04:23:54.580736   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.583200   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.583489   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.583522   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.583672   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.583871   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.584066   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.584195   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.584402   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.584567   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.584582   67282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:23:54.835402   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:23:54.835436   67282 machine.go:96] duration metric: took 914.509404ms to provisionDockerMachine
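
The last provisioning step above drops a CRIO_MINIKUBE_OPTIONS file into /etc/sysconfig (contents shown verbatim in the SSH output) and restarts crio so the in-cluster service CIDR is treated as an insecure registry. A minimal local sketch of the same step, with error handling kept short:

    // Sketch: write the crio.minikube drop-in and restart crio.
    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        const dropIn = "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
        if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(dropIn), 0o644); err != nil {
            panic(err)
        }
        if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
            panic(string(out))
        }
    }
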
	I1004 04:23:54.835451   67282 start.go:293] postStartSetup for "old-k8s-version-420062" (driver="kvm2")
	I1004 04:23:54.835466   67282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:23:54.835491   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:54.835870   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:23:54.835902   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.838257   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.838645   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.838670   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.838810   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.838972   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.839117   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.839247   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:54.927041   67282 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:23:54.931330   67282 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:23:54.931357   67282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:23:54.931424   67282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:23:54.931538   67282 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:23:54.931658   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:23:54.941402   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:23:54.967433   67282 start.go:296] duration metric: took 131.968424ms for postStartSetup
	I1004 04:23:54.967495   67282 fix.go:56] duration metric: took 20.29830643s for fixHost
	I1004 04:23:54.967523   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:54.970138   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.970485   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:54.970502   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:54.970802   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:54.971000   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.971164   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:54.971330   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:54.971560   67282 main.go:141] libmachine: Using SSH client type: native
	I1004 04:23:54.971739   67282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.146 22 <nil> <nil>}
	I1004 04:23:54.971751   67282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:23:55.089031   67282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015835.056238818
	
	I1004 04:23:55.089054   67282 fix.go:216] guest clock: 1728015835.056238818
	I1004 04:23:55.089063   67282 fix.go:229] Guest: 2024-10-04 04:23:55.056238818 +0000 UTC Remote: 2024-10-04 04:23:54.967501465 +0000 UTC m=+186.499621032 (delta=88.737353ms)
	I1004 04:23:55.089086   67282 fix.go:200] guest clock delta is within tolerance: 88.737353ms
	I1004 04:23:55.089093   67282 start.go:83] releasing machines lock for "old-k8s-version-420062", held for 20.419961099s
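
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine because the ~88ms delta is inside tolerance. A small stdlib sketch of that comparison using the exact values from this run; the one-second tolerance is an assumption, not minikube's configured value.

    // Sketch: compute the guest-vs-host clock delta and check a tolerance.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d, nil
    }

    func main() {
        // values captured in this run: guest `date +%s.%N` vs. host time
        delta, err := guestClockDelta("1728015835.056238818", time.Unix(1728015834, 967501465))
        if err != nil {
            panic(err)
        }
        if delta > time.Second { // assumed tolerance for the sketch
            fmt.Printf("guest clock delta %s is outside tolerance\n", delta)
            return
        }
        fmt.Printf("guest clock delta %s is within tolerance\n", delta)
    }
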
	I1004 04:23:55.089124   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.089472   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:55.092047   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.092519   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.092552   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.092784   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093364   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093566   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .DriverName
	I1004 04:23:55.093670   67282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:23:55.093715   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:55.093808   67282 ssh_runner.go:195] Run: cat /version.json
	I1004 04:23:55.093834   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHHostname
	I1004 04:23:55.096451   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.096829   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.096862   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.096881   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.097173   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:55.097364   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:55.097446   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:55.097474   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:55.097548   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:55.097685   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHPort
	I1004 04:23:55.097816   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHKeyPath
	I1004 04:23:55.097823   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:55.097953   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetSSHUsername
	I1004 04:23:55.098106   67282 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/old-k8s-version-420062/id_rsa Username:docker}
	I1004 04:23:55.207195   67282 ssh_runner.go:195] Run: systemctl --version
	I1004 04:23:55.214080   67282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:23:55.369882   67282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:23:55.376111   67282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:23:55.376171   67282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:23:55.393916   67282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:23:55.393945   67282 start.go:495] detecting cgroup driver to use...
	I1004 04:23:55.394015   67282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:23:55.411330   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:23:55.427665   67282 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:23:55.427734   67282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:23:55.445180   67282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:23:55.465131   67282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:23:55.596260   67282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:23:55.781647   67282 docker.go:233] disabling docker service ...
	I1004 04:23:55.781711   67282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:23:55.801252   67282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:23:55.817688   67282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:23:55.952563   67282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:23:56.081096   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:23:56.096194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:23:56.116859   67282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1004 04:23:56.116924   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.129060   67282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:23:56.129133   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.141246   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.158759   67282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:23:56.172580   67282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:23:56.192027   67282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:23:56.206698   67282 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:23:56.206757   67282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:23:56.223074   67282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
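The sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge only exists once br_netfilter is loaded, so the code falls back to modprobe and then re-enables IPv4 forwarding. A minimal sketch of that fallback (run as root):

    if ! sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
        modprobe br_netfilter            # creates /proc/sys/net/bridge/*
    fi
    echo 1 > /proc/sys/net/ipv4/ip_forward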
	I1004 04:23:56.241061   67282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:23:56.365616   67282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:23:56.474445   67282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:23:56.474519   67282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:23:56.480077   67282 start.go:563] Will wait 60s for crictl version
	I1004 04:23:56.480133   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:23:56.485207   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:23:56.537710   67282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
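After the daemon-reload and CRI-O restart, the log shows two 60-second waits: one for the runtime socket to appear and one for crictl to answer. Roughly the same readiness gate in plain shell (timeout from coreutils assumed):

    sudo systemctl restart crio
    timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done'
    sudo /usr/bin/crictl version    # reports RuntimeName: cri-o, RuntimeVersion: 1.29.1 once the API is up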
	I1004 04:23:56.537802   67282 ssh_runner.go:195] Run: crio --version
	I1004 04:23:56.571679   67282 ssh_runner.go:195] Run: crio --version
	I1004 04:23:56.605639   67282 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1004 04:23:55.116525   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Start
	I1004 04:23:55.116723   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring networks are active...
	I1004 04:23:55.117665   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring network default is active
	I1004 04:23:55.118079   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Ensuring network mk-default-k8s-diff-port-281471 is active
	I1004 04:23:55.118565   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Getting domain xml...
	I1004 04:23:55.119417   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Creating domain...
	I1004 04:23:56.429715   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting to get IP...
	I1004 04:23:56.430752   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.431261   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.431353   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.431245   68239 retry.go:31] will retry after 200.843618ms: waiting for machine to come up
	I1004 04:23:56.633542   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.633974   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.634003   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.633923   68239 retry.go:31] will retry after 291.906374ms: waiting for machine to come up
	I1004 04:23:56.927325   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.927847   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:56.927880   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:56.927813   68239 retry.go:31] will retry after 374.509137ms: waiting for machine to come up
	I1004 04:23:57.304251   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.304713   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.304738   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:57.304671   68239 retry.go:31] will retry after 583.046975ms: waiting for machine to come up
	I1004 04:23:57.889410   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.889837   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:57.889868   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:57.889795   68239 retry.go:31] will retry after 549.483036ms: waiting for machine to come up
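The retry.go lines above are libmachine polling libvirt's DHCP leases for the new domain's MAC address, with a growing delay between attempts (200 ms, ~290 ms, ...). A rough host-side equivalent, assuming virsh is available and using the network and MAC from the log:

    until virsh -c qemu:///system net-dhcp-leases mk-default-k8s-diff-port-281471 \
          | grep -q '52:54:00:cd:36:92'; do
        sleep 1    # minikube grows this delay on each retry rather than sleeping a fixed interval
    done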
	I1004 04:23:56.606945   67282 main.go:141] libmachine: (old-k8s-version-420062) Calling .GetIP
	I1004 04:23:56.610421   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:56.610952   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:e4:4f", ip: ""} in network mk-old-k8s-version-420062: {Iface:virbr2 ExpiryTime:2024-10-04 05:23:46 +0000 UTC Type:0 Mac:52:54:00:fb:e4:4f Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:old-k8s-version-420062 Clientid:01:52:54:00:fb:e4:4f}
	I1004 04:23:56.610976   67282 main.go:141] libmachine: (old-k8s-version-420062) DBG | domain old-k8s-version-420062 has defined IP address 192.168.50.146 and MAC address 52:54:00:fb:e4:4f in network mk-old-k8s-version-420062
	I1004 04:23:56.611373   67282 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1004 04:23:56.615872   67282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
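The /etc/hosts refresh above is a filter-then-append rewrite, so a stale host.minikube.internal entry is never duplicated. The same idiom, with the gateway IP from the log:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.50.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts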
	I1004 04:23:56.629783   67282 kubeadm.go:883] updating cluster {Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:23:56.629932   67282 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1004 04:23:56.629983   67282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:23:56.690260   67282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:23:56.690343   67282 ssh_runner.go:195] Run: which lz4
	I1004 04:23:56.695808   67282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:23:56.701593   67282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:23:56.701623   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1004 04:23:54.156612   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace has status "Ready":"True"
	I1004 04:23:54.156637   66755 pod_ready.go:82] duration metric: took 5.506141622s for pod "coredns-7c65d6cfc9-z7pqn" in "kube-system" namespace to be "Ready" ...
	I1004 04:23:54.156646   66755 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:23:56.164534   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:58.166994   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:23:58.440643   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:58.441080   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:58.441109   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:58.441034   68239 retry.go:31] will retry after 585.437747ms: waiting for machine to come up
	I1004 04:23:59.027951   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.028414   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.028441   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:59.028369   68239 retry.go:31] will retry after 773.32668ms: waiting for machine to come up
	I1004 04:23:59.803329   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.803752   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:23:59.803793   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:23:59.803722   68239 retry.go:31] will retry after 936.396482ms: waiting for machine to come up
	I1004 04:24:00.741805   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:00.742328   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:00.742372   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:00.742262   68239 retry.go:31] will retry after 1.294836266s: waiting for machine to come up
	I1004 04:24:02.038222   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:02.038750   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:02.038785   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:02.038699   68239 retry.go:31] will retry after 2.282660025s: waiting for machine to come up
	I1004 04:23:58.525796   67282 crio.go:462] duration metric: took 1.830039762s to copy over tarball
	I1004 04:23:58.525868   67282 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:24:01.514552   67282 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.98865618s)
	I1004 04:24:01.514585   67282 crio.go:469] duration metric: took 2.988759159s to extract the tarball
	I1004 04:24:01.514595   67282 ssh_runner.go:146] rm: /preloaded.tar.lz4
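Restoring the preload is simply a streamed lz4 decompress into /var with security xattrs preserved (so file capabilities survive), followed by deleting the tarball; as a sketch:

    # the ~473 MB preload tarball was copied over SSH in the step above
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4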
	I1004 04:24:01.562130   67282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:01.598856   67282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1004 04:24:01.598882   67282 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 04:24:01.598960   67282 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:01.599035   67282 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.599047   67282 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.599048   67282 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1004 04:24:01.599020   67282 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.599025   67282 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.598967   67282 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.598967   67282 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.600760   67282 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.600772   67282 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1004 04:24:01.600767   67282 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:01.600791   67282 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.600802   67282 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.600804   67282 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.600807   67282 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.600840   67282 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.837527   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.877366   67282 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1004 04:24:01.877413   67282 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.877464   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:01.882328   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.914693   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.934055   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:01.941737   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1004 04:24:01.943929   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:01.944540   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:01.948337   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:01.970977   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1004 04:24:01.995537   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1004 04:24:02.127073   67282 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1004 04:24:02.127097   67282 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1004 04:24:02.127121   67282 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.127121   67282 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.127156   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.127159   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128471   67282 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1004 04:24:02.128532   67282 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.128535   67282 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1004 04:24:02.128560   67282 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.128571   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128595   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128598   67282 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1004 04:24:02.128627   67282 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.128669   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.128730   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1004 04:24:02.128761   67282 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1004 04:24:02.128783   67282 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1004 04:24:02.128815   67282 ssh_runner.go:195] Run: which crictl
	I1004 04:24:02.133675   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.133724   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.141911   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.141950   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.141989   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.142044   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.263733   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.263744   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.263798   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.265990   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.297523   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.297566   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.379282   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1004 04:24:02.379318   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1004 04:24:02.379331   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1004 04:24:02.417271   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1004 04:24:02.454521   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1004 04:24:02.454559   67282 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 04:24:02.496644   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1004 04:24:02.533632   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1004 04:24:02.533690   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1004 04:24:02.533750   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1004 04:24:02.568138   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1004 04:24:02.568153   67282 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1004 04:24:02.911933   67282 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:03.055844   67282 cache_images.go:92] duration metric: took 1.456943316s to LoadCachedImages
	W1004 04:24:03.055959   67282 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1004 04:24:03.055976   67282 kubeadm.go:934] updating node { 192.168.50.146 8443 v1.20.0 crio true true} ...
	I1004 04:24:03.056087   67282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-420062 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:03.056162   67282 ssh_runner.go:195] Run: crio config
	I1004 04:24:03.103752   67282 cni.go:84] Creating CNI manager for ""
	I1004 04:24:03.103792   67282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:03.103805   67282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:03.103826   67282 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.146 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-420062 NodeName:old-k8s-version-420062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1004 04:24:03.103952   67282 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-420062"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:03.104008   67282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1004 04:24:03.114316   67282 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:03.114372   67282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:03.124059   67282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1004 04:24:03.143310   67282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:03.161143   67282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1004 04:24:03.178444   67282 ssh_runner.go:195] Run: grep 192.168.50.146	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:03.182235   67282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:03.195103   67282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:03.317820   67282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:03.334820   67282 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062 for IP: 192.168.50.146
	I1004 04:24:03.334840   67282 certs.go:194] generating shared ca certs ...
	I1004 04:24:03.334855   67282 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:03.335008   67282 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:03.335049   67282 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:03.335059   67282 certs.go:256] generating profile certs ...
	I1004 04:24:03.335156   67282 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.key
	I1004 04:24:03.335212   67282 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key.c1f9ed6b
	I1004 04:24:03.335260   67282 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key
	I1004 04:24:03.335368   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:03.335394   67282 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:03.335401   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:03.335426   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:03.335451   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:03.335476   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:03.335518   67282 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:03.336260   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:03.373985   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:03.408150   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:03.444219   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:03.493160   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1004 04:24:00.665171   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:02.815874   66755 pod_ready.go:103] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:04.022715   66755 pod_ready.go:93] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.022744   66755 pod_ready.go:82] duration metric: took 9.866089641s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.022756   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.028094   66755 pod_ready.go:93] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.028115   66755 pod_ready.go:82] duration metric: took 5.350911ms for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.028123   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.033106   66755 pod_ready.go:93] pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.033124   66755 pod_ready.go:82] duration metric: took 4.995208ms for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.033132   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9qpgb" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.037388   66755 pod_ready.go:93] pod "kube-proxy-9qpgb" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.037409   66755 pod_ready.go:82] duration metric: took 4.270278ms for pod "kube-proxy-9qpgb" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.037420   66755 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.042717   66755 pod_ready.go:93] pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:04.042737   66755 pod_ready.go:82] duration metric: took 5.30887ms for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.042747   66755 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:04.324259   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:04.324749   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:04.324811   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:04.324726   68239 retry.go:31] will retry after 2.070089599s: waiting for machine to come up
	I1004 04:24:06.396547   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:06.396991   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:06.397015   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:06.396944   68239 retry.go:31] will retry after 3.403718824s: waiting for machine to come up
	I1004 04:24:03.533084   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 04:24:03.565405   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:03.613938   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:24:03.642711   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:03.674784   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:03.706968   67282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:03.731329   67282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:03.749003   67282 ssh_runner.go:195] Run: openssl version
	I1004 04:24:03.755219   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:03.766499   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.771322   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.771413   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:03.778185   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:03.790581   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:03.802556   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.807312   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.807373   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:03.813595   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:03.825043   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:03.835389   67282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.840004   67282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.840051   67282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:03.847540   67282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
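The test/ln pairs above install each PEM under /etc/ssl/certs using its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). For a single file the recipe is:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")     # b5213941 for this CA
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"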
	I1004 04:24:03.862303   67282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:03.868029   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:03.874811   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:03.880797   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:03.886622   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:03.892273   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:03.898129   67282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
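Each -checkend 86400 call asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means yes. A readable version of the same check for a few of the certificates probed above:

    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
        if openssl x509 -noout -in "$crt" -checkend 86400; then
            echo "$crt: valid for at least another 24h"
        else
            echo "$crt: expires within 24h"
        fi
    done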
	I1004 04:24:03.905775   67282 kubeadm.go:392] StartCluster: {Name:old-k8s-version-420062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-420062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:03.905852   67282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:03.905890   67282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:03.954627   67282 cri.go:89] found id: ""
	I1004 04:24:03.954702   67282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:03.965146   67282 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:03.965170   67282 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:03.965236   67282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:03.975404   67282 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:03.976362   67282 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-420062" does not appear in /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:24:03.976990   67282 kubeconfig.go:62] /home/jenkins/minikube-integration/19546-9647/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-420062" cluster setting kubeconfig missing "old-k8s-version-420062" context setting]
	I1004 04:24:03.977906   67282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:03.979485   67282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:03.989487   67282 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.146
	I1004 04:24:03.989517   67282 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:03.989529   67282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:03.989577   67282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:04.031536   67282 cri.go:89] found id: ""
	I1004 04:24:04.031607   67282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:04.048652   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:04.057813   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:04.057830   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:04.057867   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:24:04.066213   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:04.066252   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:04.074904   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:24:04.083485   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:04.083522   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:04.092314   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:24:04.100528   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:04.100572   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:04.109232   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:24:04.118051   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:04.118091   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:24:04.127430   67282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
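The grep/rm pairs above are the stale-kubeconfig cleanup: any of the four kubeconfigs that does not already point at https://control-plane.minikube.internal:8443 is deleted so the kubeadm init phases that follow regenerate it. Compactly:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" 2>/dev/null; then
            sudo rm -f "/etc/kubernetes/$f"      # missing or stale; kubeadm will recreate it
        fi
    done
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml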
	I1004 04:24:04.137949   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:04.272627   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:04.940435   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.181288   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.268873   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:05.373549   67282 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:05.373653   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:05.874543   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.374154   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:06.874343   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:07.374744   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:07.874734   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:08.374255   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
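The repeated pgrep runs above poll roughly every 500 ms for the kube-apiserver process to appear after the init phases; the equivalent wait loop is:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 0.5
    done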
	I1004 04:24:06.050700   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:08.548473   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:09.802504   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:09.802912   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | unable to find current IP address of domain default-k8s-diff-port-281471 in network mk-default-k8s-diff-port-281471
	I1004 04:24:09.802937   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | I1004 04:24:09.802870   68239 retry.go:31] will retry after 3.430575602s: waiting for machine to come up
	I1004 04:24:13.236792   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.237230   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Found IP for machine: 192.168.39.201
	I1004 04:24:13.237251   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Reserving static IP address...
	I1004 04:24:13.237268   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has current primary IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.237712   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-281471", mac: "52:54:00:cd:36:92", ip: "192.168.39.201"} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.237745   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Reserved static IP address: 192.168.39.201
	I1004 04:24:13.237765   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | skip adding static IP to network mk-default-k8s-diff-port-281471 - found existing host DHCP lease matching {name: "default-k8s-diff-port-281471", mac: "52:54:00:cd:36:92", ip: "192.168.39.201"}
	I1004 04:24:13.237786   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Getting to WaitForSSH function...
	I1004 04:24:13.237805   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Waiting for SSH to be available...
	I1004 04:24:13.240068   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.240354   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.240384   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.240514   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Using SSH client type: external
	I1004 04:24:13.240540   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa (-rw-------)
	I1004 04:24:13.240577   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:24:13.240594   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | About to run SSH command:
	I1004 04:24:13.240608   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | exit 0
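
The external SSH probe logged just above amounts to invoking /usr/bin/ssh with host-key checking disabled, identity-only auth, and "exit 0" as the remote command. A minimal Go sketch of that pattern follows; the retry count, sleep, and key path are illustrative assumptions, and only a subset of the logged ssh options is shown.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs the same style of external probe as the logged command:
// success of the remote "exit 0" means sshd is up and accepted the key.
func waitForSSH(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	for attempt := 0; attempt < 10; attempt++ {
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second) // the log shows similar multi-second retries
	}
	return fmt.Errorf("ssh to %s did not come up", addr)
}

func main() {
	// placeholder key path; the real one lives under the minikube machines dir
	if err := waitForSSH("192.168.39.201", "/home/jenkins/.ssh/id_rsa"); err != nil {
		fmt.Println(err)
	}
}
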
	I1004 04:24:08.874627   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:09.374627   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:09.874278   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.374675   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:10.873949   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:11.373966   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:11.873775   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:12.373874   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:12.874010   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:13.374575   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
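
The repeated pgrep lines above are a simple readiness poll: the same command is re-run roughly every 500ms until a kube-apiserver process matching the pattern exists. A hedged Go sketch of that loop (the 4-minute timeout is an assumption, not taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		// -x: whole-line match, -n: newest match, -f: match the full command line
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
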
	I1004 04:24:10.550171   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:13.049596   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:14.741098   66293 start.go:364] duration metric: took 53.770546651s to acquireMachinesLock for "no-preload-658545"
	I1004 04:24:14.741156   66293 start.go:96] Skipping create...Using existing machine configuration
	I1004 04:24:14.741164   66293 fix.go:54] fixHost starting: 
	I1004 04:24:14.741565   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:14.741595   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:14.758364   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37947
	I1004 04:24:14.758823   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:14.759356   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:24:14.759383   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:14.759700   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:14.759895   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:14.760077   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:24:14.761849   66293 fix.go:112] recreateIfNeeded on no-preload-658545: state=Stopped err=<nil>
	I1004 04:24:14.761873   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	W1004 04:24:14.762037   66293 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 04:24:14.764123   66293 out.go:177] * Restarting existing kvm2 VM for "no-preload-658545" ...
	I1004 04:24:13.371830   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | SSH cmd err, output: <nil>: 
	I1004 04:24:13.372219   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetConfigRaw
	I1004 04:24:13.372817   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:13.375676   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.376080   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.376116   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.376393   67541 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/config.json ...
	I1004 04:24:13.376616   67541 machine.go:93] provisionDockerMachine start ...
	I1004 04:24:13.376638   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:13.376845   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.379413   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.379847   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.379908   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.380015   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.380204   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.380360   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.380493   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.380657   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.380913   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.380988   67541 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:24:13.492488   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:24:13.492528   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.492749   67541 buildroot.go:166] provisioning hostname "default-k8s-diff-port-281471"
	I1004 04:24:13.492768   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.492928   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.495691   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.496003   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.496031   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.496160   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.496368   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.496530   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.496651   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.496785   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.497017   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.497034   67541 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-281471 && echo "default-k8s-diff-port-281471" | sudo tee /etc/hostname
	I1004 04:24:13.627336   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-281471
	
	I1004 04:24:13.627364   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.630757   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.631162   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.631199   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.631486   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:13.631701   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.631874   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:13.632018   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:13.632216   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:13.632431   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:13.632457   67541 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-281471' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-281471/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-281471' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:24:13.758386   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:24:13.758413   67541 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:24:13.758462   67541 buildroot.go:174] setting up certificates
	I1004 04:24:13.758472   67541 provision.go:84] configureAuth start
	I1004 04:24:13.758484   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetMachineName
	I1004 04:24:13.758740   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:13.761590   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.761899   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.761939   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.762068   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:13.764293   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.764644   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:13.764672   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:13.764811   67541 provision.go:143] copyHostCerts
	I1004 04:24:13.764869   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:24:13.764880   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:24:13.764936   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:24:13.765046   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:24:13.765055   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:24:13.765075   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:24:13.765127   67541 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:24:13.765135   67541 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:24:13.765160   67541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:24:13.765235   67541 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-281471 san=[127.0.0.1 192.168.39.201 default-k8s-diff-port-281471 localhost minikube]
	I1004 04:24:14.075640   67541 provision.go:177] copyRemoteCerts
	I1004 04:24:14.075698   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:24:14.075722   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.078293   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.078658   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.078689   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.078827   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.079048   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.079213   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.079348   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.167232   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:24:14.193065   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1004 04:24:14.218112   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:24:14.243281   67541 provision.go:87] duration metric: took 484.783764ms to configureAuth
	I1004 04:24:14.243310   67541 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:24:14.243506   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:14.243593   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.246497   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.246837   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.246885   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.247019   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.247211   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.247384   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.247551   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.247719   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:14.247909   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:14.247923   67541 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:24:14.487651   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:24:14.487675   67541 machine.go:96] duration metric: took 1.11104473s to provisionDockerMachine
	I1004 04:24:14.487686   67541 start.go:293] postStartSetup for "default-k8s-diff-port-281471" (driver="kvm2")
	I1004 04:24:14.487696   67541 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:24:14.487733   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.488084   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:24:14.488114   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.490844   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.491198   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.491229   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.491372   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.491562   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.491700   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.491815   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.579398   67541 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:24:14.584068   67541 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:24:14.584098   67541 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:24:14.584179   67541 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:24:14.584274   67541 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:24:14.584379   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:24:14.594853   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:14.621833   67541 start.go:296] duration metric: took 134.135256ms for postStartSetup
	I1004 04:24:14.621874   67541 fix.go:56] duration metric: took 19.532563115s for fixHost
	I1004 04:24:14.621895   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.625077   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.625418   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.625443   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.625678   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.625900   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.626059   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.626205   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.626373   67541 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:14.626589   67541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1004 04:24:14.626603   67541 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:24:14.740932   67541 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015854.697826512
	
	I1004 04:24:14.740950   67541 fix.go:216] guest clock: 1728015854.697826512
	I1004 04:24:14.740957   67541 fix.go:229] Guest: 2024-10-04 04:24:14.697826512 +0000 UTC Remote: 2024-10-04 04:24:14.621877739 +0000 UTC m=+171.379203860 (delta=75.948773ms)
	I1004 04:24:14.741000   67541 fix.go:200] guest clock delta is within tolerance: 75.948773ms
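
The guest-clock check above runs `date +%s.%N` over SSH, parses the result, and compares it with the host clock. A small Go sketch of that comparison, reusing the timestamp captured in the log; parseGuestClock and the 2-second tolerance are illustrative, not minikube's actual values.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad the fraction to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1728015854.697826512") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta > 2*time.Second { // illustrative tolerance
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}
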
	I1004 04:24:14.741007   67541 start.go:83] releasing machines lock for "default-k8s-diff-port-281471", held for 19.651737082s
	I1004 04:24:14.741031   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.741291   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:14.744142   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.744498   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.744518   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.744720   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745368   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745559   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:14.745665   67541 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:24:14.745706   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.745802   67541 ssh_runner.go:195] Run: cat /version.json
	I1004 04:24:14.745843   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:14.748443   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748779   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.748813   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748838   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.748927   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.749064   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.749245   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:14.749267   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:14.749283   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.749441   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:14.749481   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.749589   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:14.749725   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:14.749856   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:14.833632   67541 ssh_runner.go:195] Run: systemctl --version
	I1004 04:24:14.863812   67541 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:24:15.016823   67541 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:24:15.023613   67541 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:24:15.023696   67541 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:24:15.042546   67541 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:24:15.042576   67541 start.go:495] detecting cgroup driver to use...
	I1004 04:24:15.042645   67541 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:24:15.060267   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:24:15.076088   67541 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:24:15.076155   67541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:24:15.091741   67541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:24:15.107153   67541 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:24:15.230591   67541 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:24:15.381704   67541 docker.go:233] disabling docker service ...
	I1004 04:24:15.381776   67541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:24:15.397616   67541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:24:15.412350   67541 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:24:15.569525   67541 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:24:15.690120   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:24:15.705348   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:24:15.728253   67541 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:24:15.728334   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.739875   67541 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:24:15.739951   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.751997   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.765898   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.777917   67541 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:24:15.791235   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.802390   67541 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.825385   67541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:15.837278   67541 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:24:15.848791   67541 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:24:15.848864   67541 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:24:15.870774   67541 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
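
The three commands above handle the bridge-netfilter prerequisite: when the sysctl file is missing, br_netfilter is loaded, then IPv4 forwarding is switched on. A Go sketch of the same sequence (must run as root; the paths come straight from the logged commands):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(bridgeSysctl); err != nil {
		// The sysctl entry only exists once the br_netfilter module is loaded.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
			return
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Printf("enabling ip_forward failed: %v\n", err)
	}
}
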
	I1004 04:24:15.883544   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:15.997406   67541 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:24:16.095391   67541 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:24:16.095508   67541 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:24:16.102427   67541 start.go:563] Will wait 60s for crictl version
	I1004 04:24:16.102510   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:24:16.106958   67541 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:24:16.150721   67541 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:24:16.150824   67541 ssh_runner.go:195] Run: crio --version
	I1004 04:24:16.181714   67541 ssh_runner.go:195] Run: crio --version
	I1004 04:24:16.214202   67541 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:24:16.215583   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetIP
	I1004 04:24:16.218418   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:16.218800   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:16.218831   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:16.219002   67541 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 04:24:16.223382   67541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:16.236443   67541 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:24:16.236565   67541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:24:16.236652   67541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:16.279095   67541 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:24:16.279158   67541 ssh_runner.go:195] Run: which lz4
	I1004 04:24:16.283684   67541 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 04:24:16.288436   67541 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 04:24:16.288472   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1004 04:24:17.853549   67541 crio.go:462] duration metric: took 1.569889689s to copy over tarball
	I1004 04:24:17.853631   67541 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 04:24:14.765651   66293 main.go:141] libmachine: (no-preload-658545) Calling .Start
	I1004 04:24:14.765886   66293 main.go:141] libmachine: (no-preload-658545) Ensuring networks are active...
	I1004 04:24:14.766761   66293 main.go:141] libmachine: (no-preload-658545) Ensuring network default is active
	I1004 04:24:14.767179   66293 main.go:141] libmachine: (no-preload-658545) Ensuring network mk-no-preload-658545 is active
	I1004 04:24:14.767706   66293 main.go:141] libmachine: (no-preload-658545) Getting domain xml...
	I1004 04:24:14.768478   66293 main.go:141] libmachine: (no-preload-658545) Creating domain...
	I1004 04:24:16.087556   66293 main.go:141] libmachine: (no-preload-658545) Waiting to get IP...
	I1004 04:24:16.088628   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.089032   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.089093   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.089008   68422 retry.go:31] will retry after 276.442313ms: waiting for machine to come up
	I1004 04:24:16.367448   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.367923   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.367953   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.367894   68422 retry.go:31] will retry after 291.504157ms: waiting for machine to come up
	I1004 04:24:16.661396   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:16.661958   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:16.662009   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:16.661932   68422 retry.go:31] will retry after 378.34293ms: waiting for machine to come up
	I1004 04:24:17.041431   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:17.041942   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:17.041970   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:17.041916   68422 retry.go:31] will retry after 553.613866ms: waiting for machine to come up
	I1004 04:24:17.596745   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:17.597294   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:17.597327   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:17.597259   68422 retry.go:31] will retry after 611.098402ms: waiting for machine to come up
	I1004 04:24:18.210083   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:18.210569   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:18.210592   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:18.210530   68422 retry.go:31] will retry after 691.8822ms: waiting for machine to come up
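
The `retry.go:31` lines above show a jittered, growing wait between attempts while the driver waits for the VM to obtain an IP. A hedged Go sketch of that retry shape; retryUntil, the initial wait, and the growth factor are illustrative and do not reproduce minikube's exact backoff.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs check with a growing, jittered wait until it succeeds
// or the overall deadline passes.
func retryUntil(deadline time.Duration, check func() error) error {
	start := time.Now()
	wait := 250 * time.Millisecond
	for {
		if check() == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return errors.New("timed out waiting for condition")
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2))) // add jitter
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2 // grow the base wait, roughly like the logged intervals
	}
}

func main() {
	attempts := 0
	_ = retryUntil(10*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("no IP yet") // stand-in for the real "machine has an IP" probe
		}
		return nil
	})
}
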
	I1004 04:24:13.873857   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:14.374241   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:14.873863   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.374063   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.873950   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:16.373819   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:16.874290   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:17.374357   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:17.874163   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:18.374160   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:15.049926   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:17.051060   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
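
The pod_ready lines above keep polling the metrics-server pod until its Ready condition turns true. A sketch of that check with client-go, assuming the k8s.io/client-go modules are available; the kubeconfig path is a placeholder, while the pod name and namespace come from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"metrics-server-6867b74b74-d5b6b", metav1.GetOptions{}) // pod name from the log
	if err != nil {
		panic(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %q Ready=%v\n", pod.Name, ready)
}
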
	I1004 04:24:20.132987   67541 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.279324141s)
	I1004 04:24:20.133023   67541 crio.go:469] duration metric: took 2.279442603s to extract the tarball
	I1004 04:24:20.133033   67541 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 04:24:20.171805   67541 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:20.217431   67541 crio.go:514] all images are preloaded for cri-o runtime.
	I1004 04:24:20.217458   67541 cache_images.go:84] Images are preloaded, skipping loading
	I1004 04:24:20.217468   67541 kubeadm.go:934] updating node { 192.168.39.201 8444 v1.31.1 crio true true} ...
	I1004 04:24:20.217586   67541 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-281471 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:20.217687   67541 ssh_runner.go:195] Run: crio config
	I1004 04:24:20.269529   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:24:20.269559   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:20.269569   67541 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:20.269604   67541 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.201 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-281471 NodeName:default-k8s-diff-port-281471 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:24:20.269822   67541 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.201
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-281471"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:20.269913   67541 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:24:20.281286   67541 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:20.281368   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:20.292186   67541 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1004 04:24:20.310972   67541 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:20.329420   67541 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1004 04:24:20.348358   67541 ssh_runner.go:195] Run: grep 192.168.39.201	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:20.352641   67541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:20.366317   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:20.499648   67541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:20.518930   67541 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471 for IP: 192.168.39.201
	I1004 04:24:20.518954   67541 certs.go:194] generating shared ca certs ...
	I1004 04:24:20.518971   67541 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:20.519121   67541 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:20.519167   67541 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:20.519177   67541 certs.go:256] generating profile certs ...
	I1004 04:24:20.519279   67541 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/client.key
	I1004 04:24:20.519347   67541 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.key.6cd63ef9
	I1004 04:24:20.519381   67541 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.key
	I1004 04:24:20.519492   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:20.519527   67541 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:20.519539   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:20.519570   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:20.519614   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:20.519643   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:20.519710   67541 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:20.520418   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:20.566110   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:20.613646   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:20.648416   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:20.678840   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1004 04:24:20.722021   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 04:24:20.749381   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:20.776777   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/default-k8s-diff-port-281471/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 04:24:20.803998   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:20.833182   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:20.859600   67541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:20.887732   67541 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:20.910566   67541 ssh_runner.go:195] Run: openssl version
	I1004 04:24:20.917151   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:20.930475   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.935819   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.935895   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:20.942607   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:20.954950   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:20.967348   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.972468   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.972543   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:20.979061   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:20.992010   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:21.008370   67541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.015101   67541 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.015161   67541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:21.023491   67541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:21.035766   67541 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:21.041416   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:21.048405   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:21.055468   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:21.062228   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:21.068967   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:21.075984   67541 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1004 04:24:21.086088   67541 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-281471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-281471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:21.086196   67541 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:21.086253   67541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:21.131997   67541 cri.go:89] found id: ""
	I1004 04:24:21.132061   67541 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:21.145219   67541 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:21.145237   67541 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:21.145289   67541 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:21.157041   67541 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:21.158724   67541 kubeconfig.go:125] found "default-k8s-diff-port-281471" server: "https://192.168.39.201:8444"
	I1004 04:24:21.162295   67541 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:21.173771   67541 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.201
	I1004 04:24:21.173806   67541 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:21.173820   67541 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:21.173891   67541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:21.215149   67541 cri.go:89] found id: ""
	I1004 04:24:21.215216   67541 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:21.234432   67541 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:21.245688   67541 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:21.245707   67541 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:21.245758   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1004 04:24:21.256101   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:21.256168   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:21.267319   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1004 04:24:21.279995   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:21.280050   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:21.292588   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1004 04:24:21.304478   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:21.304545   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:21.317012   67541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1004 04:24:21.328769   67541 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:21.328853   67541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:24:21.341597   67541 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:21.353901   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:21.483705   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.340208   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.582628   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.662202   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:22.773206   67541 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:22.773327   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.274151   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:18.903981   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:18.904373   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:18.904398   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:18.904331   68422 retry.go:31] will retry after 1.022635653s: waiting for machine to come up
	I1004 04:24:19.929163   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:19.929707   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:19.929749   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:19.929656   68422 retry.go:31] will retry after 939.130061ms: waiting for machine to come up
	I1004 04:24:20.870067   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:20.870578   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:20.870606   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:20.870521   68422 retry.go:31] will retry after 1.673919202s: waiting for machine to come up
	I1004 04:24:22.546229   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:22.546621   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:22.546650   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:22.546569   68422 retry.go:31] will retry after 1.962556159s: waiting for machine to come up
	I1004 04:24:18.874214   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.374670   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.874355   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:20.374335   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:20.874299   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:21.374492   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:21.874293   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:22.373890   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:22.874622   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.374639   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:19.552128   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:22.050844   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:24.051071   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:23.774477   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:23.807536   67541 api_server.go:72] duration metric: took 1.034328656s to wait for apiserver process to appear ...
	I1004 04:24:23.807569   67541 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:24:23.807593   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.646266   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:26.646299   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:26.646319   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.696828   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:26.696856   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:26.808107   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:26.819887   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:26.819947   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:27.308535   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:27.317320   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:27.317372   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:27.807868   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:27.817762   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:27.817805   67541 api_server.go:103] status: https://192.168.39.201:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:28.307660   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:24:28.313515   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 200:
	ok
	I1004 04:24:28.320539   67541 api_server.go:141] control plane version: v1.31.1
	I1004 04:24:28.320568   67541 api_server.go:131] duration metric: took 4.512991081s to wait for apiserver health ...
	I1004 04:24:28.320578   67541 cni.go:84] Creating CNI manager for ""
	I1004 04:24:28.320586   67541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:28.322138   67541 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:24:24.511356   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:24.511886   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:24.511917   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:24.511843   68422 retry.go:31] will retry after 2.5950382s: waiting for machine to come up
	I1004 04:24:27.109018   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:27.109474   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:27.109503   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:27.109451   68422 retry.go:31] will retry after 2.984182925s: waiting for machine to come up
	I1004 04:24:23.873822   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:24.373911   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:24.874756   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:25.374035   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:25.873874   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.374503   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.874371   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:27.374335   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:27.873941   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:28.373861   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:26.550974   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:28.552007   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:28.323513   67541 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:24:28.336556   67541 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:24:28.358371   67541 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:24:28.373163   67541 system_pods.go:59] 8 kube-system pods found
	I1004 04:24:28.373204   67541 system_pods.go:61] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:24:28.373217   67541 system_pods.go:61] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:24:28.373228   67541 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:24:28.373239   67541 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:24:28.373246   67541 system_pods.go:61] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:24:28.373256   67541 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:24:28.373267   67541 system_pods.go:61] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:24:28.373273   67541 system_pods.go:61] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:24:28.373283   67541 system_pods.go:74] duration metric: took 14.891267ms to wait for pod list to return data ...
	I1004 04:24:28.373294   67541 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:24:28.378226   67541 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:24:28.378269   67541 node_conditions.go:123] node cpu capacity is 2
	I1004 04:24:28.378285   67541 node_conditions.go:105] duration metric: took 4.985167ms to run NodePressure ...
	I1004 04:24:28.378309   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:28.649369   67541 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:24:28.654563   67541 kubeadm.go:739] kubelet initialised
	I1004 04:24:28.654584   67541 kubeadm.go:740] duration metric: took 5.188927ms waiting for restarted kubelet to initialise ...
	I1004 04:24:28.654591   67541 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:24:28.662152   67541 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.668248   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.668278   67541 pod_ready.go:82] duration metric: took 6.099746ms for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.668287   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.668294   67541 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.675790   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.675811   67541 pod_ready.go:82] duration metric: took 7.509617ms for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.675823   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.675830   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.683763   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.683811   67541 pod_ready.go:82] duration metric: took 7.972006ms for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.683830   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.683839   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:28.761974   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.762006   67541 pod_ready.go:82] duration metric: took 78.154275ms for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:28.762021   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:28.762030   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.162590   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-proxy-4nnld" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.162623   67541 pod_ready.go:82] duration metric: took 400.583388ms for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.162634   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-proxy-4nnld" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.162643   67541 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.562557   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.562584   67541 pod_ready.go:82] duration metric: took 399.929497ms for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.562595   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.562602   67541 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:29.963502   67541 pod_ready.go:98] node "default-k8s-diff-port-281471" hosting pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.963528   67541 pod_ready.go:82] duration metric: took 400.919452ms for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	E1004 04:24:29.963539   67541 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-281471" hosting pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:29.963547   67541 pod_ready.go:39] duration metric: took 1.308947485s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:24:29.963561   67541 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:24:29.976241   67541 ops.go:34] apiserver oom_adj: -16
	I1004 04:24:29.976268   67541 kubeadm.go:597] duration metric: took 8.831025549s to restartPrimaryControlPlane
	I1004 04:24:29.976278   67541 kubeadm.go:394] duration metric: took 8.890203906s to StartCluster
	I1004 04:24:29.976295   67541 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:29.976372   67541 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:24:29.977898   67541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:29.978168   67541 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.201 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:24:29.978222   67541 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:24:29.978306   67541 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978330   67541 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-281471"
	W1004 04:24:29.978341   67541 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:24:29.978329   67541 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978353   67541 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-281471"
	I1004 04:24:29.978369   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:29.978367   67541 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-281471"
	I1004 04:24:29.978377   67541 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-281471"
	W1004 04:24:29.978387   67541 addons.go:243] addon metrics-server should already be in state true
	I1004 04:24:29.978413   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:29.978464   67541 config.go:182] Loaded profile config "default-k8s-diff-port-281471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:29.978731   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978783   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.978818   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978871   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.978839   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:29.978970   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:29.979903   67541 out.go:177] * Verifying Kubernetes components...
	I1004 04:24:29.981432   67541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:29.994332   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42631
	I1004 04:24:29.994917   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:29.995488   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:29.995503   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:29.995865   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:29.996675   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:29.999180   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
	I1004 04:24:29.999220   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I1004 04:24:29.999564   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:29.999651   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.000157   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.000182   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.000262   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.000281   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.000379   67541 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-281471"
	W1004 04:24:30.000398   67541 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:24:30.000429   67541 host.go:66] Checking if "default-k8s-diff-port-281471" exists ...
	I1004 04:24:30.000613   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.000646   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.000790   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.000812   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.001163   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.001215   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.001259   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.001307   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.016576   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34675
	I1004 04:24:30.016650   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41997
	I1004 04:24:30.016796   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44507
	I1004 04:24:30.016993   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017079   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017138   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.017536   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017557   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017548   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017584   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017537   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.017621   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.017929   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.017931   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.017970   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.018100   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.018152   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.018559   67541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:24:30.018600   67541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:24:30.020021   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.020637   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.022016   67541 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:30.022018   67541 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:24:30.023395   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:24:30.023417   67541 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:24:30.023444   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.023489   67541 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:24:30.023506   67541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:24:30.023528   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.027678   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028005   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028129   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.028180   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028538   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.028552   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.028560   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.028724   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.028750   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.028881   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.028911   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.029013   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.029055   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.029124   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.037309   67541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46395
	I1004 04:24:30.037846   67541 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:24:30.038328   67541 main.go:141] libmachine: Using API Version  1
	I1004 04:24:30.038355   67541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:24:30.038683   67541 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:24:30.038850   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetState
	I1004 04:24:30.040366   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .DriverName
	I1004 04:24:30.040572   67541 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:24:30.040586   67541 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:24:30.040602   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHHostname
	I1004 04:24:30.043618   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.044070   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:36:92", ip: ""} in network mk-default-k8s-diff-port-281471: {Iface:virbr4 ExpiryTime:2024-10-04 05:24:07 +0000 UTC Type:0 Mac:52:54:00:cd:36:92 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:default-k8s-diff-port-281471 Clientid:01:52:54:00:cd:36:92}
	I1004 04:24:30.044092   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | domain default-k8s-diff-port-281471 has defined IP address 192.168.39.201 and MAC address 52:54:00:cd:36:92 in network mk-default-k8s-diff-port-281471
	I1004 04:24:30.044232   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHPort
	I1004 04:24:30.044413   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHKeyPath
	I1004 04:24:30.044541   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .GetSSHUsername
	I1004 04:24:30.044687   67541 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/default-k8s-diff-port-281471/id_rsa Username:docker}
	I1004 04:24:30.194435   67541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:30.223577   67541 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-281471" to be "Ready" ...
	I1004 04:24:30.277458   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:24:30.316201   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:24:30.316227   67541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:24:30.333635   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:24:30.346511   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:24:30.346549   67541 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:24:30.405197   67541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:24:30.405219   67541 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:24:30.465174   67541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:24:31.307064   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307137   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307430   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.307442   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.307469   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.307546   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307574   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307691   67541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030198983s)
	I1004 04:24:31.307733   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.307747   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.307789   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.307811   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.309264   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.309275   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.309281   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.309291   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.309299   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.309538   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.309568   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.309583   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.315635   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.315653   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.315917   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.315933   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.411630   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.411658   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.411934   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.411951   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.411965   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.411983   67541 main.go:141] libmachine: Making call to close driver server
	I1004 04:24:31.411997   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) Calling .Close
	I1004 04:24:31.412221   67541 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:24:31.412261   67541 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:24:31.412274   67541 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-281471"
	I1004 04:24:31.412283   67541 main.go:141] libmachine: (default-k8s-diff-port-281471) DBG | Closing plugin on server side
	I1004 04:24:31.414267   67541 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1004 04:24:31.415607   67541 addons.go:510] duration metric: took 1.43738386s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1004 04:24:32.227563   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:30.095611   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:30.096032   66293 main.go:141] libmachine: (no-preload-658545) DBG | unable to find current IP address of domain no-preload-658545 in network mk-no-preload-658545
	I1004 04:24:30.096061   66293 main.go:141] libmachine: (no-preload-658545) DBG | I1004 04:24:30.095981   68422 retry.go:31] will retry after 2.833386023s: waiting for machine to come up
	I1004 04:24:32.933027   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.933509   66293 main.go:141] libmachine: (no-preload-658545) Found IP for machine: 192.168.72.54
	I1004 04:24:32.933538   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has current primary IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.933544   66293 main.go:141] libmachine: (no-preload-658545) Reserving static IP address...
	I1004 04:24:32.933950   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "no-preload-658545", mac: "52:54:00:f5:6c:11", ip: "192.168.72.54"} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:32.933970   66293 main.go:141] libmachine: (no-preload-658545) Reserved static IP address: 192.168.72.54
	I1004 04:24:32.933988   66293 main.go:141] libmachine: (no-preload-658545) DBG | skip adding static IP to network mk-no-preload-658545 - found existing host DHCP lease matching {name: "no-preload-658545", mac: "52:54:00:f5:6c:11", ip: "192.168.72.54"}
	I1004 04:24:32.934002   66293 main.go:141] libmachine: (no-preload-658545) DBG | Getting to WaitForSSH function...
	I1004 04:24:32.934016   66293 main.go:141] libmachine: (no-preload-658545) Waiting for SSH to be available...
	I1004 04:24:32.936089   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.936440   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:32.936471   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:32.936572   66293 main.go:141] libmachine: (no-preload-658545) DBG | Using SSH client type: external
	I1004 04:24:32.936599   66293 main.go:141] libmachine: (no-preload-658545) DBG | Using SSH private key: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa (-rw-------)
	I1004 04:24:32.936637   66293 main.go:141] libmachine: (no-preload-658545) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 04:24:32.936650   66293 main.go:141] libmachine: (no-preload-658545) DBG | About to run SSH command:
	I1004 04:24:32.936661   66293 main.go:141] libmachine: (no-preload-658545) DBG | exit 0
	I1004 04:24:33.064432   66293 main.go:141] libmachine: (no-preload-658545) DBG | SSH cmd err, output: <nil>: 
	I1004 04:24:33.064791   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetConfigRaw
	I1004 04:24:33.065494   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:33.068038   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.068302   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.068325   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.068580   66293 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/config.json ...
	I1004 04:24:33.068837   66293 machine.go:93] provisionDockerMachine start ...
	I1004 04:24:33.068858   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:33.069072   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.071425   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.071748   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.071819   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.071946   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.072166   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.072332   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.072429   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.072587   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.072799   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.072814   66293 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 04:24:33.184623   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1004 04:24:33.184656   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.184912   66293 buildroot.go:166] provisioning hostname "no-preload-658545"
	I1004 04:24:33.184946   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.185126   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.188804   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.189189   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.189222   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.189419   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.189664   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.189839   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.190002   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.190128   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.190300   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.190313   66293 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-658545 && echo "no-preload-658545" | sudo tee /etc/hostname
	I1004 04:24:33.316349   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-658545
	
	I1004 04:24:33.316381   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.319460   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.319908   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.319945   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.320110   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.320301   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.320475   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.320628   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.320811   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.321031   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.321058   66293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-658545' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-658545/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-658545' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 04:24:28.874265   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:29.374364   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:29.874581   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:30.373909   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:30.874089   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.374708   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.874696   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:32.374061   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:32.874233   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:33.374290   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:31.050105   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:33.549870   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:33.444185   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 04:24:33.444221   66293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19546-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-9647/.minikube}
	I1004 04:24:33.444246   66293 buildroot.go:174] setting up certificates
	I1004 04:24:33.444257   66293 provision.go:84] configureAuth start
	I1004 04:24:33.444273   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetMachineName
	I1004 04:24:33.444569   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:33.447726   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.448137   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.448168   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.448332   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.450903   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.451311   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.451340   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.451479   66293 provision.go:143] copyHostCerts
	I1004 04:24:33.451559   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem, removing ...
	I1004 04:24:33.451571   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem
	I1004 04:24:33.451638   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/ca.pem (1082 bytes)
	I1004 04:24:33.451748   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem, removing ...
	I1004 04:24:33.451763   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem
	I1004 04:24:33.451818   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/cert.pem (1123 bytes)
	I1004 04:24:33.451897   66293 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem, removing ...
	I1004 04:24:33.451906   66293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem
	I1004 04:24:33.451931   66293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-9647/.minikube/key.pem (1675 bytes)
	I1004 04:24:33.451992   66293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem org=jenkins.no-preload-658545 san=[127.0.0.1 192.168.72.54 localhost minikube no-preload-658545]
	I1004 04:24:33.577106   66293 provision.go:177] copyRemoteCerts
	I1004 04:24:33.577160   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 04:24:33.577183   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.579990   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.580330   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.580359   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.580496   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.580672   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.580810   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.580937   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:33.671123   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1004 04:24:33.697805   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1004 04:24:33.725408   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 04:24:33.751285   66293 provision.go:87] duration metric: took 307.010531ms to configureAuth
	I1004 04:24:33.751315   66293 buildroot.go:189] setting minikube options for container-runtime
	I1004 04:24:33.751553   66293 config.go:182] Loaded profile config "no-preload-658545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:24:33.751651   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.754476   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.754896   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:33.754938   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:33.755087   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:33.755282   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.755450   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:33.755592   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:33.755723   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:33.755969   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:33.755987   66293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 04:24:33.996596   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 04:24:33.996625   66293 machine.go:96] duration metric: took 927.772762ms to provisionDockerMachine
	I1004 04:24:33.996636   66293 start.go:293] postStartSetup for "no-preload-658545" (driver="kvm2")
	I1004 04:24:33.996645   66293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 04:24:33.996662   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:33.996958   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 04:24:33.996981   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:33.999632   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.000082   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.000111   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.000324   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.000537   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.000733   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.000924   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.089338   66293 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 04:24:34.094278   66293 info.go:137] Remote host: Buildroot 2023.02.9
	I1004 04:24:34.094303   66293 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/addons for local assets ...
	I1004 04:24:34.094377   66293 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-9647/.minikube/files for local assets ...
	I1004 04:24:34.094468   66293 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem -> 168792.pem in /etc/ssl/certs
	I1004 04:24:34.094597   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 04:24:34.105335   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:34.134191   66293 start.go:296] duration metric: took 137.541908ms for postStartSetup
	I1004 04:24:34.134243   66293 fix.go:56] duration metric: took 19.393079344s for fixHost
	I1004 04:24:34.134269   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.137227   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.137599   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.137638   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.137779   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.137978   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.138156   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.138289   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.138459   66293 main.go:141] libmachine: Using SSH client type: native
	I1004 04:24:34.138652   66293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1004 04:24:34.138663   66293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 04:24:34.250671   66293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728015874.218795126
	
	I1004 04:24:34.250699   66293 fix.go:216] guest clock: 1728015874.218795126
	I1004 04:24:34.250709   66293 fix.go:229] Guest: 2024-10-04 04:24:34.218795126 +0000 UTC Remote: 2024-10-04 04:24:34.134249208 +0000 UTC m=+355.755571497 (delta=84.545918ms)
	I1004 04:24:34.250735   66293 fix.go:200] guest clock delta is within tolerance: 84.545918ms
	I1004 04:24:34.250742   66293 start.go:83] releasing machines lock for "no-preload-658545", held for 19.509615446s
	I1004 04:24:34.250763   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.250965   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:34.254332   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.254720   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.254746   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.254982   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255550   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255745   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:24:34.255843   66293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 04:24:34.255907   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.255973   66293 ssh_runner.go:195] Run: cat /version.json
	I1004 04:24:34.255996   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:24:34.258802   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259036   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259118   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.259143   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259309   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.259487   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.259538   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:34.259563   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:34.259633   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.259752   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:24:34.259845   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.259891   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:24:34.260042   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:24:34.260180   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:24:34.362345   66293 ssh_runner.go:195] Run: systemctl --version
	I1004 04:24:34.368641   66293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 04:24:34.527679   66293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 04:24:34.534212   66293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 04:24:34.534291   66293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 04:24:34.553539   66293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 04:24:34.553570   66293 start.go:495] detecting cgroup driver to use...
	I1004 04:24:34.553638   66293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 04:24:34.573489   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 04:24:34.588220   66293 docker.go:217] disabling cri-docker service (if available) ...
	I1004 04:24:34.588281   66293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 04:24:34.606014   66293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 04:24:34.621246   66293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 04:24:34.749423   66293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 04:24:34.915880   66293 docker.go:233] disabling docker service ...
	I1004 04:24:34.915960   66293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 04:24:34.936625   66293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 04:24:34.951534   66293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 04:24:35.089398   66293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 04:24:35.225269   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 04:24:35.241006   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 04:24:35.261586   66293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1004 04:24:35.261651   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.273501   66293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 04:24:35.273571   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.285392   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.296475   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.307774   66293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 04:24:35.319241   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.330361   66293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.349013   66293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 04:24:35.360603   66293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 04:24:35.371516   66293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 04:24:35.371581   66293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 04:24:35.387209   66293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 04:24:35.398144   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:35.528196   66293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 04:24:35.629120   66293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 04:24:35.629198   66293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 04:24:35.634243   66293 start.go:563] Will wait 60s for crictl version
	I1004 04:24:35.634307   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:35.638372   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 04:24:35.678659   66293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1004 04:24:35.678763   66293 ssh_runner.go:195] Run: crio --version
	I1004 04:24:35.715285   66293 ssh_runner.go:195] Run: crio --version
	I1004 04:24:35.751571   66293 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1004 04:24:34.228500   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:36.727080   67541 node_ready.go:53] node "default-k8s-diff-port-281471" has status "Ready":"False"
	I1004 04:24:37.228706   67541 node_ready.go:49] node "default-k8s-diff-port-281471" has status "Ready":"True"
	I1004 04:24:37.228745   67541 node_ready.go:38] duration metric: took 7.005123712s for node "default-k8s-diff-port-281471" to be "Ready" ...
	I1004 04:24:37.228760   67541 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:24:37.235256   67541 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:35.752737   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetIP
	I1004 04:24:35.755375   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:35.755763   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:24:35.755818   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:24:35.756063   66293 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1004 04:24:35.760601   66293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:35.773870   66293 kubeadm.go:883] updating cluster {Name:no-preload-658545 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 04:24:35.773970   66293 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 04:24:35.774001   66293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 04:24:35.813619   66293 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1004 04:24:35.813650   66293 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 04:24:35.813736   66293 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:35.813756   66293 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.813785   66293 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1004 04:24:35.813796   66293 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.813877   66293 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:35.813740   66293 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.813758   66293 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.813771   66293 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.815271   66293 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:35.815277   66293 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1004 04:24:35.815292   66293 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.815271   66293 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.815276   66293 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:35.815353   66293 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.815358   66293 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.815402   66293 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.956470   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:35.963066   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:35.965110   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:35.970080   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:35.972477   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:35.988253   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.013802   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1004 04:24:36.063322   66293 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1004 04:24:36.063364   66293 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.063405   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214786   66293 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1004 04:24:36.214827   66293 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.214867   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214928   66293 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1004 04:24:36.214961   66293 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1004 04:24:36.214995   66293 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.215023   66293 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1004 04:24:36.215043   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.214965   66293 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.215081   66293 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1004 04:24:36.215047   66293 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.215100   66293 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.215110   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.215139   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.215147   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:36.274103   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.274103   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.274185   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.274292   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.274329   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.274343   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.392523   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.405236   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.405257   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.408799   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.408857   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.408860   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.511001   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1004 04:24:36.568598   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1004 04:24:36.568658   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1004 04:24:36.568720   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1004 04:24:36.568929   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1004 04:24:36.569021   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1004 04:24:36.599594   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1004 04:24:36.599733   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.696242   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1004 04:24:36.696294   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1004 04:24:36.696336   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1004 04:24:36.696363   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:36.696390   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:36.696399   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:36.696401   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1004 04:24:36.696449   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1004 04:24:36.696507   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:36.696521   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:36.696508   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1004 04:24:36.696563   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.696613   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1004 04:24:36.701522   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1004 04:24:37.132809   66293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:33.874344   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:34.374158   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:34.873848   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:35.373944   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:35.874697   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.373831   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.874231   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:37.374723   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:37.873861   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:38.374206   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:36.050420   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:38.051653   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:39.242026   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:41.244977   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:39.289977   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.593422519s)
	I1004 04:24:39.290020   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1004 04:24:39.290087   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.593446646s)
	I1004 04:24:39.290114   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1004 04:24:39.290136   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:39.290158   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.593739386s)
	I1004 04:24:39.290175   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1004 04:24:39.290097   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.593563637s)
	I1004 04:24:39.290203   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.593795645s)
	I1004 04:24:39.290208   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1004 04:24:39.290213   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1004 04:24:39.290213   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1004 04:24:39.290265   66293 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.157417466s)
	I1004 04:24:39.290314   66293 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1004 04:24:39.290348   66293 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:39.290392   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:24:40.750955   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.460708297s)
	I1004 04:24:40.751065   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1004 04:24:40.751102   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:40.750969   66293 ssh_runner.go:235] Completed: which crictl: (1.460561899s)
	I1004 04:24:40.751159   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1004 04:24:40.751190   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:43.031349   66293 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.280136047s)
	I1004 04:24:43.031395   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.280209115s)
	I1004 04:24:43.031566   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1004 04:24:43.031493   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:43.031600   66293 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:43.031641   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1004 04:24:43.084191   66293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:24:38.873705   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:39.374361   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:39.874144   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.373793   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.873796   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:41.374744   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:41.874442   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:42.374561   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:42.874638   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:43.374677   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:40.548818   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:42.550744   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:43.742554   67541 pod_ready.go:103] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:44.244427   67541 pod_ready.go:93] pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.244453   67541 pod_ready.go:82] duration metric: took 7.009169057s for pod "coredns-7c65d6cfc9-wz6rd" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.244463   67541 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.250595   67541 pod_ready.go:93] pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.250617   67541 pod_ready.go:82] duration metric: took 6.147481ms for pod "etcd-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.250625   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.256537   67541 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.256570   67541 pod_ready.go:82] duration metric: took 5.936641ms for pod "kube-apiserver-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.256583   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.262681   67541 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.262707   67541 pod_ready.go:82] duration metric: took 6.115804ms for pod "kube-controller-manager-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.262721   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.271089   67541 pod_ready.go:93] pod "kube-proxy-4nnld" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.271124   67541 pod_ready.go:82] duration metric: took 8.394207ms for pod "kube-proxy-4nnld" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.271138   67541 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.640124   67541 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace has status "Ready":"True"
	I1004 04:24:44.640158   67541 pod_ready.go:82] duration metric: took 369.009816ms for pod "kube-scheduler-default-k8s-diff-port-281471" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:44.640172   67541 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	I1004 04:24:46.647420   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:45.132971   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.101305613s)
	I1004 04:24:45.133043   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1004 04:24:45.133071   66293 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.048844025s)
	I1004 04:24:45.133079   66293 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:45.133110   66293 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1004 04:24:45.133135   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1004 04:24:45.133179   66293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:47.228047   66293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.094844592s)
	I1004 04:24:47.228087   66293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1004 04:24:47.228089   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.0949275s)
	I1004 04:24:47.228119   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1004 04:24:47.228154   66293 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:47.228214   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1004 04:24:43.874583   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:44.374117   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:44.874398   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.374755   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.874039   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:46.374598   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:46.874446   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:47.374384   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:47.874596   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:48.374021   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:45.049760   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:47.551861   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:48.647700   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:50.648288   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:52.649288   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:50.627043   66293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.398805191s)
	I1004 04:24:50.627085   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1004 04:24:50.627122   66293 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:50.627191   66293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1004 04:24:51.282056   66293 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19546-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1004 04:24:51.282099   66293 cache_images.go:123] Successfully loaded all cached images
	I1004 04:24:51.282104   66293 cache_images.go:92] duration metric: took 15.468441268s to LoadCachedImages
	I1004 04:24:51.282116   66293 kubeadm.go:934] updating node { 192.168.72.54 8443 v1.31.1 crio true true} ...
	I1004 04:24:51.282243   66293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-658545 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 04:24:51.282321   66293 ssh_runner.go:195] Run: crio config
	I1004 04:24:51.333133   66293 cni.go:84] Creating CNI manager for ""
	I1004 04:24:51.333162   66293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:51.333173   66293 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 04:24:51.333201   66293 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-658545 NodeName:no-preload-658545 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 04:24:51.333361   66293 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-658545"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 04:24:51.333419   66293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 04:24:51.344694   66293 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 04:24:51.344757   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 04:24:51.354990   66293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1004 04:24:51.372572   66293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 04:24:51.394129   66293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
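	The kubeadm.yaml written above is plain multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch for reading such a file back and listing its documents follows; it assumes gopkg.in/yaml.v3 and the on-node path shown in the log, and is illustrative only, not minikube's own loader.

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Path taken from the log above; hypothetical usage, not minikube code.
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		// Decode each "---"-separated document in turn.
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			// Each document declares apiVersion and kind, e.g.
			// kubeadm.k8s.io/v1beta3 ClusterConfiguration.
			fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
		}
	}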
	I1004 04:24:51.412865   66293 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I1004 04:24:51.416985   66293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 04:24:51.430835   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:24:51.559349   66293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:24:51.579093   66293 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545 for IP: 192.168.72.54
	I1004 04:24:51.579120   66293 certs.go:194] generating shared ca certs ...
	I1004 04:24:51.579140   66293 certs.go:226] acquiring lock for ca certs: {Name:mka73703c3246b6a7cc11e262d5e935d8e6515b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:24:51.579318   66293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key
	I1004 04:24:51.579378   66293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key
	I1004 04:24:51.579391   66293 certs.go:256] generating profile certs ...
	I1004 04:24:51.579494   66293 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/client.key
	I1004 04:24:51.579588   66293 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.key.10ceac04
	I1004 04:24:51.579648   66293 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.key
	I1004 04:24:51.579808   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem (1338 bytes)
	W1004 04:24:51.579849   66293 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879_empty.pem, impossibly tiny 0 bytes
	I1004 04:24:51.579861   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 04:24:51.579891   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/ca.pem (1082 bytes)
	I1004 04:24:51.579926   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/cert.pem (1123 bytes)
	I1004 04:24:51.579961   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/certs/key.pem (1675 bytes)
	I1004 04:24:51.580018   66293 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem (1708 bytes)
	I1004 04:24:51.580871   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 04:24:51.630190   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1004 04:24:51.667887   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 04:24:51.715372   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1004 04:24:51.750063   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1004 04:24:51.776606   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 04:24:51.808943   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 04:24:51.839165   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/no-preload-658545/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 04:24:51.867862   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 04:24:51.898026   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/certs/16879.pem --> /usr/share/ca-certificates/16879.pem (1338 bytes)
	I1004 04:24:51.926810   66293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/ssl/certs/168792.pem --> /usr/share/ca-certificates/168792.pem (1708 bytes)
	I1004 04:24:51.955416   66293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 04:24:51.977621   66293 ssh_runner.go:195] Run: openssl version
	I1004 04:24:51.984023   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16879.pem && ln -fs /usr/share/ca-certificates/16879.pem /etc/ssl/certs/16879.pem"
	I1004 04:24:51.997672   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.002969   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:08 /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.003039   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16879.pem
	I1004 04:24:52.009473   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16879.pem /etc/ssl/certs/51391683.0"
	I1004 04:24:52.021001   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168792.pem && ln -fs /usr/share/ca-certificates/168792.pem /etc/ssl/certs/168792.pem"
	I1004 04:24:52.032834   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.037679   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:08 /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.037742   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168792.pem
	I1004 04:24:52.044012   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168792.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 04:24:52.055377   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 04:24:52.066222   66293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.070747   66293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:49 /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.070794   66293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 04:24:52.076922   66293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 04:24:52.087952   66293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 04:24:52.093052   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 04:24:52.099710   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 04:24:52.105841   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 04:24:52.112092   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 04:24:52.118428   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 04:24:52.125380   66293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
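	The series of "openssl x509 ... -checkend 86400" runs above verifies that each control-plane certificate is still valid for at least 24 hours before the existing files are reused. The same check can be sketched in Go with only the standard library; the path is taken from the log, and the snippet is illustrative rather than minikube's implementation.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Certificate path as shown in the log above.
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of "-checkend 86400": does the cert survive another 24h?
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h; regeneration needed")
		} else {
			fmt.Println("certificate valid for at least another 24h")
		}
	}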
	I1004 04:24:52.132085   66293 kubeadm.go:392] StartCluster: {Name:no-preload-658545 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-658545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 04:24:52.132193   66293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 04:24:52.132254   66293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:52.171814   66293 cri.go:89] found id: ""
	I1004 04:24:52.171882   66293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 04:24:52.182484   66293 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 04:24:52.182508   66293 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 04:24:52.182559   66293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 04:24:52.193069   66293 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 04:24:52.194108   66293 kubeconfig.go:125] found "no-preload-658545" server: "https://192.168.72.54:8443"
	I1004 04:24:52.196237   66293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 04:24:52.206551   66293 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I1004 04:24:52.206584   66293 kubeadm.go:1160] stopping kube-system containers ...
	I1004 04:24:52.206598   66293 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 04:24:52.206657   66293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 04:24:52.249698   66293 cri.go:89] found id: ""
	I1004 04:24:52.249762   66293 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 04:24:52.266001   66293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:24:52.276056   66293 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:24:52.276081   66293 kubeadm.go:157] found existing configuration files:
	
	I1004 04:24:52.276128   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:24:52.285610   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:24:52.285677   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:24:52.295177   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:24:52.304309   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:24:52.304362   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:24:52.314126   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:24:52.323562   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:24:52.323618   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:24:52.332906   66293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:24:52.342199   66293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:24:52.342252   66293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:24:52.351661   66293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:24:52.361071   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:52.493171   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:48.874471   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:49.374480   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:49.874689   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.373726   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.874543   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:51.373743   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:51.874513   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:52.374719   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:52.874305   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:53.374419   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:50.049668   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:52.050522   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:55.147282   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:57.648169   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:53.586422   66293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.093219868s)
	I1004 04:24:53.586448   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:53.794085   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:53.872327   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:54.004418   66293 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:24:54.004510   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.505463   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.004602   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.036834   66293 api_server.go:72] duration metric: took 1.032414365s to wait for apiserver process to appear ...
	I1004 04:24:55.036858   66293 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:24:55.036877   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:55.037325   66293 api_server.go:269] stopped: https://192.168.72.54:8443/healthz: Get "https://192.168.72.54:8443/healthz": dial tcp 192.168.72.54:8443: connect: connection refused
	I1004 04:24:55.537513   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:57.951637   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:57.951663   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:57.951676   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.010162   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 04:24:58.010188   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 04:24:58.037484   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.060069   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:58.060161   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:53.874725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.373903   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.874127   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.374051   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:55.874019   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:56.373828   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:56.874027   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:57.373914   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:57.874598   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:58.374106   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:54.550080   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:56.550541   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:59.051837   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:24:58.536932   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:58.541611   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:58.541634   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:59.037723   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:59.057378   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 04:24:59.057411   66293 api_server.go:103] status: https://192.168.72.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 04:24:59.536994   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:24:59.545827   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 200:
	ok
	I1004 04:24:59.554199   66293 api_server.go:141] control plane version: v1.31.1
	I1004 04:24:59.554238   66293 api_server.go:131] duration metric: took 4.517373336s to wait for apiserver health ...
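	The healthz wait above simply polls https://192.168.72.54:8443/healthz, tolerating the 403 and 500 responses seen while the apiserver's post-start hooks settle, until a 200 comes back. A standard-library Go sketch of that loop follows; the address comes from the log, the retry count and interval are assumptions, and it is illustrative only, not minikube's api_server.go code.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver presents a cluster-local certificate, so skip
		// verification for this probe only.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		url := "https://192.168.72.54:8443/healthz" // address taken from the log above
		for i := 0; i < 30; i++ {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				// 403/500 appear while post-start hooks are still running;
				// only a 200 counts as healthy.
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			} else {
				fmt.Printf("healthz unreachable: %v\n", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver did not become healthy in time")
	}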
	I1004 04:24:59.554247   66293 cni.go:84] Creating CNI manager for ""
	I1004 04:24:59.554253   66293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:24:59.555912   66293 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:24:59.557009   66293 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:24:59.590146   66293 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:24:59.610903   66293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:24:59.634067   66293 system_pods.go:59] 8 kube-system pods found
	I1004 04:24:59.634109   66293 system_pods.go:61] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 04:24:59.634121   66293 system_pods.go:61] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 04:24:59.634131   66293 system_pods.go:61] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 04:24:59.634143   66293 system_pods.go:61] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 04:24:59.634151   66293 system_pods.go:61] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 04:24:59.634160   66293 system_pods.go:61] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 04:24:59.634168   66293 system_pods.go:61] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:24:59.634181   66293 system_pods.go:61] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 04:24:59.634189   66293 system_pods.go:74] duration metric: took 23.257716ms to wait for pod list to return data ...
	I1004 04:24:59.634198   66293 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:24:59.638128   66293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:24:59.638160   66293 node_conditions.go:123] node cpu capacity is 2
	I1004 04:24:59.638173   66293 node_conditions.go:105] duration metric: took 3.969841ms to run NodePressure ...
	I1004 04:24:59.638191   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 04:24:59.968829   66293 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1004 04:24:59.975495   66293 kubeadm.go:739] kubelet initialised
	I1004 04:24:59.975516   66293 kubeadm.go:740] duration metric: took 6.660196ms waiting for restarted kubelet to initialise ...
	I1004 04:24:59.975522   66293 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:25:00.084084   66293 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.113474   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.113498   66293 pod_ready.go:82] duration metric: took 29.379607ms for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.113507   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.113513   66293 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.128436   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "etcd-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.128463   66293 pod_ready.go:82] duration metric: took 14.94278ms for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.128475   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "etcd-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.128485   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.140033   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-apiserver-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.140059   66293 pod_ready.go:82] duration metric: took 11.56545ms for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.140068   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-apiserver-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.140077   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.157254   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.157286   66293 pod_ready.go:82] duration metric: took 17.197805ms for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.157298   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.157306   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.415110   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-proxy-dvr6b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.415141   66293 pod_ready.go:82] duration metric: took 257.824162ms for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.415151   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-proxy-dvr6b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.415157   66293 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:00.815201   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "kube-scheduler-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.815226   66293 pod_ready.go:82] duration metric: took 400.063468ms for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:00.815235   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "kube-scheduler-no-preload-658545" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:00.815241   66293 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:01.214416   66293 pod_ready.go:98] node "no-preload-658545" hosting pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:01.214448   66293 pod_ready.go:82] duration metric: took 399.197779ms for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	E1004 04:25:01.214461   66293 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-658545" hosting pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:01.214468   66293 pod_ready.go:39] duration metric: took 1.238937842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
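The pod_ready wait above ("waiting up to 4m0s for pod ... to be Ready", skipped early because node "no-preload-658545" is not yet Ready) is at its core a check of the PodReady condition through the Kubernetes API. A minimal client-go sketch of that check follows (kubeconfig path and pod name are taken from this log for illustration; the real waiter in pod_ready.go also handles the node-Ready short-circuit and retry timing):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has the PodReady condition True.
func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path as written by the run above; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19546-9647/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(cs, "kube-system", "coredns-7c65d6cfc9-ppggj")
	fmt.Println(ready, err)
}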
	I1004 04:25:01.214484   66293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:25:01.227389   66293 ops.go:34] apiserver oom_adj: -16
	I1004 04:25:01.227414   66293 kubeadm.go:597] duration metric: took 9.044898439s to restartPrimaryControlPlane
	I1004 04:25:01.227424   66293 kubeadm.go:394] duration metric: took 9.095346513s to StartCluster
	I1004 04:25:01.227441   66293 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:25:01.227520   66293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:25:01.229057   66293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:25:01.229318   66293 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:25:01.229389   66293 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:25:01.229496   66293 addons.go:69] Setting storage-provisioner=true in profile "no-preload-658545"
	I1004 04:25:01.229505   66293 addons.go:69] Setting default-storageclass=true in profile "no-preload-658545"
	I1004 04:25:01.229512   66293 addons.go:234] Setting addon storage-provisioner=true in "no-preload-658545"
	W1004 04:25:01.229520   66293 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:25:01.229524   66293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-658545"
	I1004 04:25:01.229558   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.229562   66293 config.go:182] Loaded profile config "no-preload-658545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:25:01.229557   66293 addons.go:69] Setting metrics-server=true in profile "no-preload-658545"
	I1004 04:25:01.229607   66293 addons.go:234] Setting addon metrics-server=true in "no-preload-658545"
	W1004 04:25:01.229621   66293 addons.go:243] addon metrics-server should already be in state true
	I1004 04:25:01.229655   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.229968   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.229987   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.229971   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.230013   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.230030   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.230133   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.231051   66293 out.go:177] * Verifying Kubernetes components...
	I1004 04:25:01.232578   66293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:25:01.256283   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38313
	I1004 04:25:01.256939   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.257689   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.257720   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.258124   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.258358   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.262593   66293 addons.go:234] Setting addon default-storageclass=true in "no-preload-658545"
	W1004 04:25:01.262620   66293 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:25:01.262652   66293 host.go:66] Checking if "no-preload-658545" exists ...
	I1004 04:25:01.263036   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.263117   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.274653   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I1004 04:25:01.275130   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.275655   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.275685   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.276062   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.276652   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.276697   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.277272   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I1004 04:25:01.277756   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.278175   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.278191   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.278548   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.279116   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.279163   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.283719   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1004 04:25:01.284316   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.284814   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.284836   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.285180   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.285751   66293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:25:01.285801   66293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:25:01.297682   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I1004 04:25:01.297859   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I1004 04:25:01.298298   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.298418   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.298975   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.298995   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.299058   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.299077   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.299407   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.299470   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.299618   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.299660   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.301552   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.302048   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.303197   66293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I1004 04:25:01.303600   66293 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:25:01.304053   66293 main.go:141] libmachine: Using API Version  1
	I1004 04:25:01.304068   66293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:25:01.304124   66293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:25:01.304234   66293 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:25:01.304403   66293 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:25:01.304571   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetState
	I1004 04:25:01.305715   66293 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:25:01.305735   66293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:25:01.305850   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:25:01.305861   66293 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:25:01.305876   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.305752   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.306101   66293 main.go:141] libmachine: (no-preload-658545) Calling .DriverName
	I1004 04:25:01.306321   66293 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:25:01.306334   66293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:25:01.306349   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHHostname
	I1004 04:25:01.310374   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.310752   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.310776   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.310888   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.311057   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.311192   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.311272   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.311338   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.311603   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312049   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.312072   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312175   66293 main.go:141] libmachine: (no-preload-658545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:6c:11", ip: ""} in network mk-no-preload-658545: {Iface:virbr3 ExpiryTime:2024-10-04 05:15:02 +0000 UTC Type:0 Mac:52:54:00:f5:6c:11 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:no-preload-658545 Clientid:01:52:54:00:f5:6c:11}
	I1004 04:25:01.312201   66293 main.go:141] libmachine: (no-preload-658545) DBG | domain no-preload-658545 has defined IP address 192.168.72.54 and MAC address 52:54:00:f5:6c:11 in network mk-no-preload-658545
	I1004 04:25:01.312302   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.312468   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.312497   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHPort
	I1004 04:25:01.312586   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.312658   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHKeyPath
	I1004 04:25:01.312681   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.312811   66293 main.go:141] libmachine: (no-preload-658545) Calling .GetSSHUsername
	I1004 04:25:01.312948   66293 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/no-preload-658545/id_rsa Username:docker}
	I1004 04:25:01.478533   66293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:25:01.511716   66293 node_ready.go:35] waiting up to 6m0s for node "no-preload-658545" to be "Ready" ...
	I1004 04:25:01.557879   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:25:01.574381   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:25:01.601090   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:25:01.601112   66293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:25:01.630465   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:25:01.630495   66293 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:25:01.681089   66293 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:25:01.681118   66293 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:25:01.703024   66293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:25:02.053562   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.053585   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.053855   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.053871   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.053882   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.053891   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.054118   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.054139   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.054128   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.061624   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.061646   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.061949   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.061967   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.061985   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.580950   66293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.00653263s)
	I1004 04:25:02.581002   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.581014   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.581350   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.581368   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.581376   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.581382   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.581459   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.581594   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.581606   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.702713   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.702739   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.703015   66293 main.go:141] libmachine: (no-preload-658545) DBG | Closing plugin on server side
	I1004 04:25:02.703028   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.703090   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.703106   66293 main.go:141] libmachine: Making call to close driver server
	I1004 04:25:02.703117   66293 main.go:141] libmachine: (no-preload-658545) Calling .Close
	I1004 04:25:02.703347   66293 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:25:02.703363   66293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:25:02.703380   66293 addons.go:475] Verifying addon metrics-server=true in "no-preload-658545"
	I1004 04:25:02.705335   66293 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1004 04:24:59.648241   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:01.649424   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:02.706605   66293 addons.go:510] duration metric: took 1.477226s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1004 04:24:58.874143   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:59.373810   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:24:59.874682   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:00.374672   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:00.873725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.374175   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.874724   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:02.374725   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:02.874746   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:03.373689   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:01.548783   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:03.549515   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:04.146633   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:06.147540   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.147626   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:03.516566   66293 node_ready.go:53] node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:06.022815   66293 node_ready.go:53] node "no-preload-658545" has status "Ready":"False"
	I1004 04:25:03.874594   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:04.374498   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:04.874377   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:05.374050   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:05.374139   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:05.412153   67282 cri.go:89] found id: ""
	I1004 04:25:05.412185   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.412195   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:05.412202   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:05.412264   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:05.446725   67282 cri.go:89] found id: ""
	I1004 04:25:05.446750   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.446758   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:05.446763   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:05.446816   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:05.487652   67282 cri.go:89] found id: ""
	I1004 04:25:05.487678   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.487686   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:05.487691   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:05.487752   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:05.526275   67282 cri.go:89] found id: ""
	I1004 04:25:05.526302   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.526310   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:05.526319   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:05.526375   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:05.565004   67282 cri.go:89] found id: ""
	I1004 04:25:05.565034   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.565045   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:05.565052   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:05.565101   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:05.601963   67282 cri.go:89] found id: ""
	I1004 04:25:05.601990   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.601998   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:05.602003   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:05.602051   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:05.638621   67282 cri.go:89] found id: ""
	I1004 04:25:05.638651   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.638660   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:05.638666   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:05.638720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:05.678042   67282 cri.go:89] found id: ""
	I1004 04:25:05.678071   67282 logs.go:282] 0 containers: []
	W1004 04:25:05.678082   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:05.678093   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:05.678107   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:05.720677   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:05.720707   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:05.775219   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:05.775252   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:05.789748   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:05.789774   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:05.918752   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:05.918783   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:05.918798   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:08.493206   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:05.551035   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.048870   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:10.148154   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.645708   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:08.516666   66293 node_ready.go:49] node "no-preload-658545" has status "Ready":"True"
	I1004 04:25:08.516690   66293 node_ready.go:38] duration metric: took 7.004939371s for node "no-preload-658545" to be "Ready" ...
	I1004 04:25:08.516699   66293 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:25:08.522101   66293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.527132   66293 pod_ready.go:93] pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:08.527153   66293 pod_ready.go:82] duration metric: took 5.024648ms for pod "coredns-7c65d6cfc9-ppggj" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.527162   66293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.534172   66293 pod_ready.go:93] pod "etcd-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:08.534195   66293 pod_ready.go:82] duration metric: took 7.027189ms for pod "etcd-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.534204   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:10.541186   66293 pod_ready.go:103] pod "kube-apiserver-no-preload-658545" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.040607   66293 pod_ready.go:93] pod "kube-apiserver-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.040640   66293 pod_ready.go:82] duration metric: took 3.506428875s for pod "kube-apiserver-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.040654   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.045845   66293 pod_ready.go:93] pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.045870   66293 pod_ready.go:82] duration metric: took 5.207108ms for pod "kube-controller-manager-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.045883   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.051587   66293 pod_ready.go:93] pod "kube-proxy-dvr6b" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.051604   66293 pod_ready.go:82] duration metric: took 5.715328ms for pod "kube-proxy-dvr6b" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.051613   66293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.116361   66293 pod_ready.go:93] pod "kube-scheduler-no-preload-658545" in "kube-system" namespace has status "Ready":"True"
	I1004 04:25:12.116401   66293 pod_ready.go:82] duration metric: took 64.774234ms for pod "kube-scheduler-no-preload-658545" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:12.116411   66293 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	I1004 04:25:08.506490   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:08.506549   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:08.545875   67282 cri.go:89] found id: ""
	I1004 04:25:08.545909   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.545920   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:08.545933   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:08.545997   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:08.582348   67282 cri.go:89] found id: ""
	I1004 04:25:08.582375   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.582383   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:08.582389   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:08.582438   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:08.637763   67282 cri.go:89] found id: ""
	I1004 04:25:08.637797   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.637809   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:08.637816   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:08.637890   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:08.681171   67282 cri.go:89] found id: ""
	I1004 04:25:08.681205   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.681216   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:08.681224   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:08.681289   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:08.719513   67282 cri.go:89] found id: ""
	I1004 04:25:08.719542   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.719549   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:08.719555   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:08.719607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:08.762152   67282 cri.go:89] found id: ""
	I1004 04:25:08.762175   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.762183   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:08.762188   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:08.762251   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:08.799857   67282 cri.go:89] found id: ""
	I1004 04:25:08.799881   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.799892   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:08.799903   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:08.799954   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:08.835264   67282 cri.go:89] found id: ""
	I1004 04:25:08.835296   67282 logs.go:282] 0 containers: []
	W1004 04:25:08.835308   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:08.835318   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:08.835330   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:08.875501   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:08.875532   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:08.929145   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:08.929178   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:08.942769   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:08.942808   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:09.025372   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:09.025401   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:09.025416   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:11.611179   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:11.625118   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:11.625253   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:11.661512   67282 cri.go:89] found id: ""
	I1004 04:25:11.661540   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.661547   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:11.661553   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:11.661607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:11.704902   67282 cri.go:89] found id: ""
	I1004 04:25:11.704931   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.704941   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:11.704948   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:11.705007   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:11.741747   67282 cri.go:89] found id: ""
	I1004 04:25:11.741770   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.741780   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:11.741787   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:11.741841   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:11.776838   67282 cri.go:89] found id: ""
	I1004 04:25:11.776863   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.776871   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:11.776876   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:11.776927   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:11.812996   67282 cri.go:89] found id: ""
	I1004 04:25:11.813024   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.813033   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:11.813038   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:11.813097   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:11.853718   67282 cri.go:89] found id: ""
	I1004 04:25:11.853744   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.853752   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:11.853758   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:11.853813   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:11.896840   67282 cri.go:89] found id: ""
	I1004 04:25:11.896867   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.896879   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:11.896885   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:11.896943   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:11.932529   67282 cri.go:89] found id: ""
	I1004 04:25:11.932552   67282 logs.go:282] 0 containers: []
	W1004 04:25:11.932561   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:11.932569   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:11.932580   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:11.946504   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:11.946538   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:12.024692   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:12.024713   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:12.024724   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:12.111942   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:12.111976   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:12.156483   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:12.156522   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:10.049912   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:12.051024   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.646058   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:16.647214   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.123343   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:16.622947   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:14.708243   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:14.722943   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:14.723007   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:14.758502   67282 cri.go:89] found id: ""
	I1004 04:25:14.758555   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.758567   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:14.758575   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:14.758633   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:14.796496   67282 cri.go:89] found id: ""
	I1004 04:25:14.796525   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.796532   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:14.796538   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:14.796595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:14.832216   67282 cri.go:89] found id: ""
	I1004 04:25:14.832247   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.832259   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:14.832266   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:14.832330   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:14.868461   67282 cri.go:89] found id: ""
	I1004 04:25:14.868491   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.868501   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:14.868509   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:14.868568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:14.909827   67282 cri.go:89] found id: ""
	I1004 04:25:14.909857   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.909867   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:14.909875   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:14.909949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:14.947809   67282 cri.go:89] found id: ""
	I1004 04:25:14.947839   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.947850   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:14.947857   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:14.947904   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:14.984073   67282 cri.go:89] found id: ""
	I1004 04:25:14.984101   67282 logs.go:282] 0 containers: []
	W1004 04:25:14.984110   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:14.984115   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:14.984170   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:15.021145   67282 cri.go:89] found id: ""
	I1004 04:25:15.021179   67282 logs.go:282] 0 containers: []
	W1004 04:25:15.021191   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:15.021204   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:15.021217   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:15.075295   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:15.075328   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:15.088953   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:15.088980   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:15.175103   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:15.175128   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:15.175143   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:15.259004   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:15.259044   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:17.825029   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:17.839496   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:17.839574   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:17.877643   67282 cri.go:89] found id: ""
	I1004 04:25:17.877673   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.877684   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:17.877692   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:17.877751   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:17.921534   67282 cri.go:89] found id: ""
	I1004 04:25:17.921563   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.921574   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:17.921581   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:17.921634   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:17.961281   67282 cri.go:89] found id: ""
	I1004 04:25:17.961307   67282 logs.go:282] 0 containers: []
	W1004 04:25:17.961315   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:17.961320   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:17.961386   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:18.001036   67282 cri.go:89] found id: ""
	I1004 04:25:18.001066   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.001078   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:18.001085   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:18.001156   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:18.043212   67282 cri.go:89] found id: ""
	I1004 04:25:18.043241   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.043252   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:18.043259   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:18.043319   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:18.082399   67282 cri.go:89] found id: ""
	I1004 04:25:18.082423   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.082430   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:18.082435   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:18.082493   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:18.120507   67282 cri.go:89] found id: ""
	I1004 04:25:18.120534   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.120544   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:18.120550   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:18.120605   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:18.156601   67282 cri.go:89] found id: ""
	I1004 04:25:18.156629   67282 logs.go:282] 0 containers: []
	W1004 04:25:18.156640   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:18.156650   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:18.156663   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:18.198393   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:18.198424   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:18.250992   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:18.251032   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:18.267984   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:18.268015   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:18.343283   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:18.343303   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:18.343314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:14.549511   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:17.048940   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:19.051125   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:18.648462   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:21.146813   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.147244   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:18.624165   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:20.627159   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.123629   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:20.922578   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:20.938037   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:20.938122   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:20.978389   67282 cri.go:89] found id: ""
	I1004 04:25:20.978417   67282 logs.go:282] 0 containers: []
	W1004 04:25:20.978426   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:20.978431   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:20.978478   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:21.033490   67282 cri.go:89] found id: ""
	I1004 04:25:21.033520   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.033528   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:21.033533   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:21.033589   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:21.087168   67282 cri.go:89] found id: ""
	I1004 04:25:21.087198   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.087209   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:21.087216   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:21.087299   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:21.144327   67282 cri.go:89] found id: ""
	I1004 04:25:21.144356   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.144366   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:21.144373   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:21.144431   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:21.183336   67282 cri.go:89] found id: ""
	I1004 04:25:21.183378   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.183390   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:21.183397   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:21.183459   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:21.221847   67282 cri.go:89] found id: ""
	I1004 04:25:21.221878   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.221892   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:21.221901   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:21.221961   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:21.258542   67282 cri.go:89] found id: ""
	I1004 04:25:21.258573   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.258584   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:21.258590   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:21.258652   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:21.303173   67282 cri.go:89] found id: ""
	I1004 04:25:21.303202   67282 logs.go:282] 0 containers: []
	W1004 04:25:21.303211   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:21.303218   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:21.303243   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:21.358109   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:21.358146   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:21.373958   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:21.373987   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:21.450956   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:21.450980   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:21.451006   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:21.534763   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:21.534807   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:21.550109   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:23.550304   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:25.148868   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:27.647698   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:25.622123   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:27.624777   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:24.082856   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:24.098263   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:24.098336   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:24.144969   67282 cri.go:89] found id: ""
	I1004 04:25:24.144999   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.145009   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:24.145015   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:24.145072   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:24.185670   67282 cri.go:89] found id: ""
	I1004 04:25:24.185693   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.185702   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:24.185708   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:24.185769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:24.223657   67282 cri.go:89] found id: ""
	I1004 04:25:24.223691   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.223703   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:24.223710   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:24.223769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:24.261841   67282 cri.go:89] found id: ""
	I1004 04:25:24.261864   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.261872   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:24.261878   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:24.261938   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:24.299734   67282 cri.go:89] found id: ""
	I1004 04:25:24.299758   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.299769   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:24.299775   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:24.299867   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:24.337413   67282 cri.go:89] found id: ""
	I1004 04:25:24.337440   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.337450   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:24.337457   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:24.337523   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:24.375963   67282 cri.go:89] found id: ""
	I1004 04:25:24.375995   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.376007   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:24.376014   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:24.376073   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:24.415978   67282 cri.go:89] found id: ""
	I1004 04:25:24.416010   67282 logs.go:282] 0 containers: []
	W1004 04:25:24.416021   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:24.416030   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:24.416045   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:24.458703   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:24.458738   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:24.510669   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:24.510704   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:24.525646   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:24.525687   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:24.603280   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:24.603310   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:24.603324   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:27.184935   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:27.200241   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:27.200321   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:27.237546   67282 cri.go:89] found id: ""
	I1004 04:25:27.237576   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.237588   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:27.237596   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:27.237653   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:27.272598   67282 cri.go:89] found id: ""
	I1004 04:25:27.272625   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.272634   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:27.272642   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:27.272700   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:27.306659   67282 cri.go:89] found id: ""
	I1004 04:25:27.306693   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.306706   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:27.306715   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:27.306779   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:27.344315   67282 cri.go:89] found id: ""
	I1004 04:25:27.344349   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.344363   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:27.344370   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:27.344428   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:27.380231   67282 cri.go:89] found id: ""
	I1004 04:25:27.380267   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.380278   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:27.380286   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:27.380346   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:27.418137   67282 cri.go:89] found id: ""
	I1004 04:25:27.418161   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.418169   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:27.418174   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:27.418225   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:27.458235   67282 cri.go:89] found id: ""
	I1004 04:25:27.458262   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.458283   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:27.458289   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:27.458342   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:27.495161   67282 cri.go:89] found id: ""
	I1004 04:25:27.495189   67282 logs.go:282] 0 containers: []
	W1004 04:25:27.495198   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:27.495206   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:27.495217   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:27.547749   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:27.547795   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:27.563322   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:27.563355   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:27.636682   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:27.636710   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:27.636725   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:27.711316   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:27.711354   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:26.050001   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:28.548322   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.147210   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:32.647027   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.122267   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:32.122501   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:30.250361   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:30.265789   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:30.265866   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:30.305127   67282 cri.go:89] found id: ""
	I1004 04:25:30.305166   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.305183   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:30.305190   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:30.305258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:30.346529   67282 cri.go:89] found id: ""
	I1004 04:25:30.346560   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.346570   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:30.346577   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:30.346641   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:30.387368   67282 cri.go:89] found id: ""
	I1004 04:25:30.387407   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.387418   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:30.387425   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:30.387489   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:30.428193   67282 cri.go:89] found id: ""
	I1004 04:25:30.428230   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.428242   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:30.428248   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:30.428308   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:30.465484   67282 cri.go:89] found id: ""
	I1004 04:25:30.465509   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.465518   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:30.465523   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:30.465573   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:30.501133   67282 cri.go:89] found id: ""
	I1004 04:25:30.501163   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.501174   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:30.501181   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:30.501248   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:30.536492   67282 cri.go:89] found id: ""
	I1004 04:25:30.536519   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.536530   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:30.536536   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:30.536587   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:30.571721   67282 cri.go:89] found id: ""
	I1004 04:25:30.571745   67282 logs.go:282] 0 containers: []
	W1004 04:25:30.571753   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:30.571761   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:30.571771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:30.626922   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:30.626958   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:30.641817   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:30.641852   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:30.725604   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:30.725633   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:30.725647   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:30.800359   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:30.800393   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:33.340747   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:33.355862   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:33.355936   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:33.397628   67282 cri.go:89] found id: ""
	I1004 04:25:33.397655   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.397662   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:33.397668   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:33.397718   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:33.442100   67282 cri.go:89] found id: ""
	I1004 04:25:33.442128   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.442137   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:33.442142   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:33.442187   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:33.481035   67282 cri.go:89] found id: ""
	I1004 04:25:33.481063   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.481076   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:33.481083   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:33.481149   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:30.549816   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:33.048791   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:35.147125   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:37.647224   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:34.122573   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:36.622639   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:33.516633   67282 cri.go:89] found id: ""
	I1004 04:25:33.516661   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.516669   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:33.516677   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:33.516727   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:33.556569   67282 cri.go:89] found id: ""
	I1004 04:25:33.556600   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.556610   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:33.556617   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:33.556679   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:33.591678   67282 cri.go:89] found id: ""
	I1004 04:25:33.591715   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.591724   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:33.591731   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:33.591786   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:33.626571   67282 cri.go:89] found id: ""
	I1004 04:25:33.626594   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.626602   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:33.626607   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:33.626650   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:33.664336   67282 cri.go:89] found id: ""
	I1004 04:25:33.664359   67282 logs.go:282] 0 containers: []
	W1004 04:25:33.664367   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:33.664375   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:33.664386   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:33.748013   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:33.748047   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:33.786730   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:33.786767   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:33.839355   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:33.839392   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:33.853807   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:33.853835   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:33.920183   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:36.420485   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:36.435150   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:36.435221   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:36.471818   67282 cri.go:89] found id: ""
	I1004 04:25:36.471842   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.471850   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:36.471855   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:36.471908   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:36.511469   67282 cri.go:89] found id: ""
	I1004 04:25:36.511496   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.511504   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:36.511509   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:36.511557   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:36.552607   67282 cri.go:89] found id: ""
	I1004 04:25:36.552633   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.552641   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:36.552646   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:36.552702   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:36.596260   67282 cri.go:89] found id: ""
	I1004 04:25:36.596282   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.596290   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:36.596295   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:36.596340   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:36.636674   67282 cri.go:89] found id: ""
	I1004 04:25:36.636700   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.636708   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:36.636713   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:36.636764   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:36.675155   67282 cri.go:89] found id: ""
	I1004 04:25:36.675194   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.675206   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:36.675214   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:36.675279   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:36.713458   67282 cri.go:89] found id: ""
	I1004 04:25:36.713485   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.713493   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:36.713498   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:36.713552   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:36.754567   67282 cri.go:89] found id: ""
	I1004 04:25:36.754596   67282 logs.go:282] 0 containers: []
	W1004 04:25:36.754607   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:36.754618   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:36.754631   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:36.824413   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:36.824439   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:36.824453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:36.900438   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:36.900471   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:36.942238   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:36.942264   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:36.992527   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:36.992556   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:35.050546   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:37.548965   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:39.647505   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:42.146720   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:38.623559   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:41.121785   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:43.122437   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:39.506599   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:39.520782   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:39.520854   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:39.561853   67282 cri.go:89] found id: ""
	I1004 04:25:39.561880   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.561891   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:39.561898   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:39.561955   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:39.597548   67282 cri.go:89] found id: ""
	I1004 04:25:39.597581   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.597591   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:39.597598   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:39.597659   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:39.634481   67282 cri.go:89] found id: ""
	I1004 04:25:39.634517   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.634525   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:39.634530   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:39.634575   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:39.677077   67282 cri.go:89] found id: ""
	I1004 04:25:39.677107   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.677117   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:39.677124   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:39.677185   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:39.716334   67282 cri.go:89] found id: ""
	I1004 04:25:39.716356   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.716364   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:39.716369   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:39.716416   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:39.754765   67282 cri.go:89] found id: ""
	I1004 04:25:39.754792   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.754803   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:39.754810   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:39.754863   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:39.788782   67282 cri.go:89] found id: ""
	I1004 04:25:39.788811   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.788824   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:39.788832   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:39.788890   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:39.821946   67282 cri.go:89] found id: ""
	I1004 04:25:39.821970   67282 logs.go:282] 0 containers: []
	W1004 04:25:39.821979   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:39.821988   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:39.822001   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:39.892629   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:39.892657   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:39.892674   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:39.973480   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:39.973515   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:40.018175   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:40.018203   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:40.068585   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:40.068620   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:42.583639   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:42.597249   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:42.597333   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:42.631993   67282 cri.go:89] found id: ""
	I1004 04:25:42.632020   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.632030   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:42.632037   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:42.632091   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:42.669708   67282 cri.go:89] found id: ""
	I1004 04:25:42.669739   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.669749   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:42.669762   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:42.669836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:42.705995   67282 cri.go:89] found id: ""
	I1004 04:25:42.706019   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.706030   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:42.706037   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:42.706094   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:42.740436   67282 cri.go:89] found id: ""
	I1004 04:25:42.740458   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.740466   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:42.740472   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:42.740524   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:42.774516   67282 cri.go:89] found id: ""
	I1004 04:25:42.774546   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.774557   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:42.774564   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:42.774614   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:42.807471   67282 cri.go:89] found id: ""
	I1004 04:25:42.807502   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.807510   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:42.807516   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:42.807561   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:42.851943   67282 cri.go:89] found id: ""
	I1004 04:25:42.851968   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.851977   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:42.851983   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:42.852040   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:42.887762   67282 cri.go:89] found id: ""
	I1004 04:25:42.887801   67282 logs.go:282] 0 containers: []
	W1004 04:25:42.887812   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:42.887822   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:42.887834   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:42.960398   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:42.960423   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:42.960440   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:43.040078   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:43.040117   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:43.081614   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:43.081638   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:43.132744   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:43.132781   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:39.551722   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:42.049418   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:44.049835   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:44.646919   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:47.146884   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:45.622878   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.122299   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:45.647332   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:45.660765   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:45.660834   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:45.696351   67282 cri.go:89] found id: ""
	I1004 04:25:45.696379   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.696390   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:45.696397   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:45.696449   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:45.738529   67282 cri.go:89] found id: ""
	I1004 04:25:45.738553   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.738561   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:45.738566   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:45.738621   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:45.773071   67282 cri.go:89] found id: ""
	I1004 04:25:45.773094   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.773103   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:45.773110   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:45.773165   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:45.810813   67282 cri.go:89] found id: ""
	I1004 04:25:45.810840   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.810852   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:45.810859   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:45.810913   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:45.848916   67282 cri.go:89] found id: ""
	I1004 04:25:45.848942   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.848951   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:45.848956   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:45.849014   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:45.886737   67282 cri.go:89] found id: ""
	I1004 04:25:45.886763   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.886772   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:45.886778   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:45.886825   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:45.922263   67282 cri.go:89] found id: ""
	I1004 04:25:45.922291   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.922301   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:45.922307   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:45.922364   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:45.956688   67282 cri.go:89] found id: ""
	I1004 04:25:45.956710   67282 logs.go:282] 0 containers: []
	W1004 04:25:45.956718   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:45.956725   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:45.956737   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:46.007334   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:46.007365   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:46.020892   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:46.020916   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:46.089786   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:46.089809   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:46.089822   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:46.175987   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:46.176017   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:46.549153   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.549893   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:49.147322   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:51.647365   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:50.622540   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:52.623714   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:48.718354   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:48.733291   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:48.733347   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:48.769149   67282 cri.go:89] found id: ""
	I1004 04:25:48.769175   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.769185   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:48.769193   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:48.769249   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:48.804386   67282 cri.go:89] found id: ""
	I1004 04:25:48.804410   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.804418   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:48.804423   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:48.804467   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:48.841747   67282 cri.go:89] found id: ""
	I1004 04:25:48.841774   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.841782   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:48.841788   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:48.841836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:48.880025   67282 cri.go:89] found id: ""
	I1004 04:25:48.880048   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.880058   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:48.880064   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:48.880121   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:48.916506   67282 cri.go:89] found id: ""
	I1004 04:25:48.916530   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.916540   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:48.916547   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:48.916607   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:48.952082   67282 cri.go:89] found id: ""
	I1004 04:25:48.952105   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.952116   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:48.952122   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:48.952177   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:48.986097   67282 cri.go:89] found id: ""
	I1004 04:25:48.986124   67282 logs.go:282] 0 containers: []
	W1004 04:25:48.986135   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:48.986143   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:48.986210   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:49.020400   67282 cri.go:89] found id: ""
	I1004 04:25:49.020428   67282 logs.go:282] 0 containers: []
	W1004 04:25:49.020436   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:49.020445   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:49.020462   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:49.074724   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:49.074754   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:49.088504   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:49.088529   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:49.165940   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:49.165961   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:49.165972   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:49.244482   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:49.244519   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:51.786086   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:51.800644   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:51.800720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:51.839951   67282 cri.go:89] found id: ""
	I1004 04:25:51.839980   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.839990   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:51.839997   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:51.840055   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:51.878660   67282 cri.go:89] found id: ""
	I1004 04:25:51.878684   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.878695   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:51.878701   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:51.878762   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:51.916640   67282 cri.go:89] found id: ""
	I1004 04:25:51.916665   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.916672   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:51.916678   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:51.916725   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:51.953800   67282 cri.go:89] found id: ""
	I1004 04:25:51.953827   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.953835   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:51.953840   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:51.953897   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:51.993107   67282 cri.go:89] found id: ""
	I1004 04:25:51.993139   67282 logs.go:282] 0 containers: []
	W1004 04:25:51.993150   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:51.993157   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:51.993214   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:52.027426   67282 cri.go:89] found id: ""
	I1004 04:25:52.027454   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.027464   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:52.027470   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:52.027521   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:52.063608   67282 cri.go:89] found id: ""
	I1004 04:25:52.063638   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.063650   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:52.063657   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:52.063717   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:52.100052   67282 cri.go:89] found id: ""
	I1004 04:25:52.100083   67282 logs.go:282] 0 containers: []
	W1004 04:25:52.100094   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:52.100106   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:52.100125   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:52.113801   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:52.113827   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:52.201284   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:52.201311   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:52.201322   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:52.280014   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:52.280047   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:52.318120   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:52.318145   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:51.048719   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:53.050304   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:54.146697   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:56.147015   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:58.148736   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:55.122546   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:57.123051   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:54.872245   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:54.886914   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:54.886990   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:54.927117   67282 cri.go:89] found id: ""
	I1004 04:25:54.927144   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.927152   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:54.927157   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:54.927205   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:54.962510   67282 cri.go:89] found id: ""
	I1004 04:25:54.962540   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.962552   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:54.962559   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:54.962619   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:54.996812   67282 cri.go:89] found id: ""
	I1004 04:25:54.996839   67282 logs.go:282] 0 containers: []
	W1004 04:25:54.996848   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:54.996854   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:54.996905   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:55.034557   67282 cri.go:89] found id: ""
	I1004 04:25:55.034587   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.034597   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:55.034605   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:55.034667   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:55.072383   67282 cri.go:89] found id: ""
	I1004 04:25:55.072416   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.072427   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:55.072434   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:55.072494   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:55.121561   67282 cri.go:89] found id: ""
	I1004 04:25:55.121588   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.121598   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:55.121604   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:55.121775   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:55.165525   67282 cri.go:89] found id: ""
	I1004 04:25:55.165553   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.165564   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:55.165570   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:55.165627   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:55.201808   67282 cri.go:89] found id: ""
	I1004 04:25:55.201836   67282 logs.go:282] 0 containers: []
	W1004 04:25:55.201846   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:55.201857   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:55.201870   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:55.280889   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:55.280917   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:55.280932   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:55.354979   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:55.355012   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:55.397144   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:55.397174   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:55.448710   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:55.448746   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:57.963840   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:25:57.977027   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:25:57.977085   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:25:58.019244   67282 cri.go:89] found id: ""
	I1004 04:25:58.019273   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.019285   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:25:58.019293   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:25:58.019351   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:25:58.057979   67282 cri.go:89] found id: ""
	I1004 04:25:58.058008   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.058018   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:25:58.058027   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:25:58.058084   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:25:58.094607   67282 cri.go:89] found id: ""
	I1004 04:25:58.094639   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.094652   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:25:58.094658   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:25:58.094726   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:25:58.130150   67282 cri.go:89] found id: ""
	I1004 04:25:58.130177   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.130188   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:25:58.130196   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:25:58.130259   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:25:58.167662   67282 cri.go:89] found id: ""
	I1004 04:25:58.167691   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.167701   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:25:58.167709   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:25:58.167769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:25:58.203480   67282 cri.go:89] found id: ""
	I1004 04:25:58.203568   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.203585   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:25:58.203594   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:25:58.203662   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:25:58.239516   67282 cri.go:89] found id: ""
	I1004 04:25:58.239537   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.239545   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:25:58.239551   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:25:58.239595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:25:58.275525   67282 cri.go:89] found id: ""
	I1004 04:25:58.275553   67282 logs.go:282] 0 containers: []
	W1004 04:25:58.275564   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:25:58.275574   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:25:58.275587   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:25:58.331191   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:25:58.331224   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:25:58.345629   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:25:58.345659   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:25:58.416297   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:25:58.416315   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:25:58.416326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:25:58.490659   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:25:58.490694   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:55.548913   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:57.549457   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:00.647858   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.146570   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:25:59.623396   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.624074   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.030058   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:01.044568   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:01.044659   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:01.082652   67282 cri.go:89] found id: ""
	I1004 04:26:01.082679   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.082688   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:01.082694   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:01.082750   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:01.120781   67282 cri.go:89] found id: ""
	I1004 04:26:01.120805   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.120814   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:01.120821   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:01.120878   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:01.159494   67282 cri.go:89] found id: ""
	I1004 04:26:01.159523   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.159531   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:01.159537   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:01.159584   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:01.195482   67282 cri.go:89] found id: ""
	I1004 04:26:01.195512   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.195521   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:01.195529   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:01.195589   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:01.233971   67282 cri.go:89] found id: ""
	I1004 04:26:01.233996   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.234006   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:01.234014   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:01.234076   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:01.275935   67282 cri.go:89] found id: ""
	I1004 04:26:01.275958   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.275966   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:01.275971   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:01.276018   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:01.315512   67282 cri.go:89] found id: ""
	I1004 04:26:01.315535   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.315543   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:01.315548   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:01.315603   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:01.356465   67282 cri.go:89] found id: ""
	I1004 04:26:01.356491   67282 logs.go:282] 0 containers: []
	W1004 04:26:01.356505   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:01.356513   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:01.356523   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:01.409237   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:01.409280   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:01.423426   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:01.423453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:01.501372   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:01.501397   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:01.501413   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:01.591087   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:01.591131   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:25:59.549485   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:01.550138   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.550258   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:05.646818   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:07.647322   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:03.634636   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:06.122840   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:04.152506   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:04.166847   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:04.166911   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:04.203138   67282 cri.go:89] found id: ""
	I1004 04:26:04.203167   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.203177   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:04.203184   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:04.203243   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:04.237427   67282 cri.go:89] found id: ""
	I1004 04:26:04.237453   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.237464   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:04.237471   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:04.237525   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:04.272468   67282 cri.go:89] found id: ""
	I1004 04:26:04.272499   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.272511   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:04.272518   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:04.272584   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:04.307347   67282 cri.go:89] found id: ""
	I1004 04:26:04.307373   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.307384   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:04.307390   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:04.307448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:04.342450   67282 cri.go:89] found id: ""
	I1004 04:26:04.342487   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.342498   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:04.342506   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:04.342568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:04.382846   67282 cri.go:89] found id: ""
	I1004 04:26:04.382874   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.382885   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:04.382893   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:04.382945   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:04.418234   67282 cri.go:89] found id: ""
	I1004 04:26:04.418260   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.418268   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:04.418273   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:04.418328   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:04.453433   67282 cri.go:89] found id: ""
	I1004 04:26:04.453456   67282 logs.go:282] 0 containers: []
	W1004 04:26:04.453464   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:04.453473   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:04.453487   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:04.502093   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:04.502123   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:04.515865   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:04.515897   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:04.595672   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:04.595698   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:04.595713   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:04.675273   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:04.675304   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:07.214965   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:07.229495   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:07.229568   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:07.268541   67282 cri.go:89] found id: ""
	I1004 04:26:07.268580   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.268591   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:07.268599   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:07.268662   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:07.321382   67282 cri.go:89] found id: ""
	I1004 04:26:07.321414   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.321424   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:07.321431   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:07.321490   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:07.379840   67282 cri.go:89] found id: ""
	I1004 04:26:07.379869   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.379878   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:07.379884   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:07.379928   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:07.431304   67282 cri.go:89] found id: ""
	I1004 04:26:07.431333   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.431343   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:07.431349   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:07.431407   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:07.466853   67282 cri.go:89] found id: ""
	I1004 04:26:07.466880   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.466888   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:07.466893   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:07.466951   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:07.501587   67282 cri.go:89] found id: ""
	I1004 04:26:07.501613   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.501624   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:07.501630   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:07.501685   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:07.536326   67282 cri.go:89] found id: ""
	I1004 04:26:07.536354   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.536364   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:07.536371   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:07.536426   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:07.575257   67282 cri.go:89] found id: ""
	I1004 04:26:07.575283   67282 logs.go:282] 0 containers: []
	W1004 04:26:07.575292   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:07.575299   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:07.575310   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:07.629477   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:07.629515   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:07.643294   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:07.643326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:07.720324   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:07.720350   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:07.720365   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:07.797641   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:07.797678   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:06.049580   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:08.548786   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.146544   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:12.146842   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:08.622497   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.622759   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:12.624285   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:10.339392   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:10.353341   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:10.353397   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:10.391023   67282 cri.go:89] found id: ""
	I1004 04:26:10.391049   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.391059   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:10.391066   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:10.391129   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:10.424345   67282 cri.go:89] found id: ""
	I1004 04:26:10.424376   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.424388   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:10.424396   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:10.424466   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:10.459344   67282 cri.go:89] found id: ""
	I1004 04:26:10.459374   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.459387   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:10.459394   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:10.459451   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:10.494898   67282 cri.go:89] found id: ""
	I1004 04:26:10.494921   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.494929   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:10.494935   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:10.494982   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:10.531084   67282 cri.go:89] found id: ""
	I1004 04:26:10.531111   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.531122   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:10.531129   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:10.531185   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:10.566918   67282 cri.go:89] found id: ""
	I1004 04:26:10.566949   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.566960   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:10.566967   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:10.567024   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:10.604888   67282 cri.go:89] found id: ""
	I1004 04:26:10.604923   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.604935   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:10.604942   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:10.605013   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:10.641578   67282 cri.go:89] found id: ""
	I1004 04:26:10.641606   67282 logs.go:282] 0 containers: []
	W1004 04:26:10.641620   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:10.641631   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:10.641648   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:10.696848   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:10.696882   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:10.710393   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:10.710417   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:10.780854   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:10.780881   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:10.780895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:10.861732   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:10.861771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:13.403231   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:13.417246   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:13.417319   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:13.451581   67282 cri.go:89] found id: ""
	I1004 04:26:13.451607   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.451616   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:13.451621   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:13.451681   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:13.488362   67282 cri.go:89] found id: ""
	I1004 04:26:13.488388   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.488396   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:13.488401   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:13.488449   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:10.549905   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:13.048997   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:14.646627   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:16.647879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:15.123067   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:17.622729   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:13.522697   67282 cri.go:89] found id: ""
	I1004 04:26:13.522729   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.522740   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:13.522751   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:13.522803   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:13.564926   67282 cri.go:89] found id: ""
	I1004 04:26:13.564959   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.564972   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:13.564981   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:13.565058   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:13.600582   67282 cri.go:89] found id: ""
	I1004 04:26:13.600612   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.600622   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:13.600630   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:13.600688   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:13.634550   67282 cri.go:89] found id: ""
	I1004 04:26:13.634575   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.634584   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:13.634591   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:13.634646   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:13.669281   67282 cri.go:89] found id: ""
	I1004 04:26:13.669311   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.669320   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:13.669326   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:13.669388   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:13.707664   67282 cri.go:89] found id: ""
	I1004 04:26:13.707693   67282 logs.go:282] 0 containers: []
	W1004 04:26:13.707703   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:13.707713   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:13.707727   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:13.721127   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:13.721168   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:13.788026   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:13.788051   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:13.788067   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:13.864505   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:13.864542   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:13.902896   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:13.902921   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:16.456813   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:16.470071   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:16.470138   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:16.506085   67282 cri.go:89] found id: ""
	I1004 04:26:16.506114   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.506125   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:16.506133   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:16.506189   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:16.540016   67282 cri.go:89] found id: ""
	I1004 04:26:16.540044   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.540052   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:16.540056   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:16.540100   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:16.579247   67282 cri.go:89] found id: ""
	I1004 04:26:16.579272   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.579280   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:16.579285   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:16.579332   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:16.615552   67282 cri.go:89] found id: ""
	I1004 04:26:16.615579   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.615601   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:16.615621   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:16.615675   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:16.652639   67282 cri.go:89] found id: ""
	I1004 04:26:16.652660   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.652671   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:16.652678   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:16.652732   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:16.689607   67282 cri.go:89] found id: ""
	I1004 04:26:16.689631   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.689643   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:16.689650   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:16.689720   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:16.724430   67282 cri.go:89] found id: ""
	I1004 04:26:16.724458   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.724469   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:16.724475   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:16.724534   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:16.758378   67282 cri.go:89] found id: ""
	I1004 04:26:16.758412   67282 logs.go:282] 0 containers: []
	W1004 04:26:16.758423   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:16.758434   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:16.758454   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:16.826234   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:16.826259   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:16.826273   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:16.906908   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:16.906945   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:16.950295   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:16.950321   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:17.002216   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:17.002253   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:15.549441   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:17.549816   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.147105   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:21.147403   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.622982   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:21.624073   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:19.516253   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:19.529664   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:19.529726   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:19.566669   67282 cri.go:89] found id: ""
	I1004 04:26:19.566700   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.566711   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:19.566718   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:19.566772   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:19.605923   67282 cri.go:89] found id: ""
	I1004 04:26:19.605951   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.605961   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:19.605968   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:19.606025   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:19.645132   67282 cri.go:89] found id: ""
	I1004 04:26:19.645158   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.645168   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:19.645175   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:19.645235   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:19.687135   67282 cri.go:89] found id: ""
	I1004 04:26:19.687160   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.687171   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:19.687178   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:19.687256   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:19.724180   67282 cri.go:89] found id: ""
	I1004 04:26:19.724213   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.724224   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:19.724230   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:19.724295   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:19.761608   67282 cri.go:89] found id: ""
	I1004 04:26:19.761638   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.761649   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:19.761656   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:19.761714   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:19.795060   67282 cri.go:89] found id: ""
	I1004 04:26:19.795089   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.795099   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:19.795106   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:19.795164   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:19.835678   67282 cri.go:89] found id: ""
	I1004 04:26:19.835703   67282 logs.go:282] 0 containers: []
	W1004 04:26:19.835712   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:19.835722   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:19.835736   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:19.889508   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:19.889543   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:19.903206   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:19.903233   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:19.973445   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:19.973471   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:19.973485   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:20.053996   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:20.054034   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:22.594171   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:22.609084   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:22.609145   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:22.650423   67282 cri.go:89] found id: ""
	I1004 04:26:22.650449   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.650459   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:22.650466   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:22.650525   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:22.686420   67282 cri.go:89] found id: ""
	I1004 04:26:22.686450   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.686461   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:22.686469   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:22.686535   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:22.721385   67282 cri.go:89] found id: ""
	I1004 04:26:22.721408   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.721416   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:22.721421   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:22.721484   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:22.765461   67282 cri.go:89] found id: ""
	I1004 04:26:22.765492   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.765504   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:22.765511   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:22.765569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:22.798192   67282 cri.go:89] found id: ""
	I1004 04:26:22.798220   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.798230   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:22.798235   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:22.798293   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:22.833110   67282 cri.go:89] found id: ""
	I1004 04:26:22.833138   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.833147   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:22.833153   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:22.833212   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:22.875653   67282 cri.go:89] found id: ""
	I1004 04:26:22.875684   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.875696   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:22.875704   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:22.875766   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:22.913906   67282 cri.go:89] found id: ""
	I1004 04:26:22.913931   67282 logs.go:282] 0 containers: []
	W1004 04:26:22.913938   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:22.913946   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:22.913957   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:22.969480   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:22.969511   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:22.983475   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:22.983500   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:23.059953   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:23.059982   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:23.059996   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:23.139106   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:23.139134   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:19.550307   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:22.048618   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:23.647507   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:26.147135   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:24.122370   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:26.122976   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:25.678489   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:25.692648   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:25.692705   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:25.728232   67282 cri.go:89] found id: ""
	I1004 04:26:25.728261   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.728269   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:25.728276   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:25.728335   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:25.763956   67282 cri.go:89] found id: ""
	I1004 04:26:25.763982   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.763991   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:25.763998   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:25.764057   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:25.799715   67282 cri.go:89] found id: ""
	I1004 04:26:25.799743   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.799753   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:25.799761   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:25.799840   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:25.834823   67282 cri.go:89] found id: ""
	I1004 04:26:25.834855   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.834866   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:25.834873   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:25.834933   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:25.869194   67282 cri.go:89] found id: ""
	I1004 04:26:25.869224   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.869235   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:25.869242   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:25.869303   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:25.903514   67282 cri.go:89] found id: ""
	I1004 04:26:25.903543   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.903553   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:25.903558   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:25.903606   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:25.939887   67282 cri.go:89] found id: ""
	I1004 04:26:25.939919   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.939930   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:25.939938   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:25.939996   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:25.981922   67282 cri.go:89] found id: ""
	I1004 04:26:25.981944   67282 logs.go:282] 0 containers: []
	W1004 04:26:25.981952   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:25.981960   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:25.981971   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:26.064860   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:26.064891   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:26.105272   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:26.105296   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:26.162602   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:26.162640   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:26.176408   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:26.176439   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:26.242264   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:24.549364   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:27.049470   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.646788   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.146205   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:33.146879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.622691   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.122181   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:33.123226   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:28.742417   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:28.755655   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:28.755723   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:28.789338   67282 cri.go:89] found id: ""
	I1004 04:26:28.789361   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.789369   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:28.789374   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:28.789420   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:28.823513   67282 cri.go:89] found id: ""
	I1004 04:26:28.823544   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.823555   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:28.823562   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:28.823619   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:28.858826   67282 cri.go:89] found id: ""
	I1004 04:26:28.858854   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.858866   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:28.858873   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:28.858927   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:28.892552   67282 cri.go:89] found id: ""
	I1004 04:26:28.892579   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.892587   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:28.892593   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:28.892639   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:28.929250   67282 cri.go:89] found id: ""
	I1004 04:26:28.929277   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.929284   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:28.929289   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:28.929335   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:28.966554   67282 cri.go:89] found id: ""
	I1004 04:26:28.966581   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.966589   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:28.966594   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:28.966642   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:28.999930   67282 cri.go:89] found id: ""
	I1004 04:26:28.999954   67282 logs.go:282] 0 containers: []
	W1004 04:26:28.999964   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:28.999970   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:29.000025   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:29.033687   67282 cri.go:89] found id: ""
	I1004 04:26:29.033717   67282 logs.go:282] 0 containers: []
	W1004 04:26:29.033727   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:29.033737   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:29.033752   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:29.109486   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:29.109523   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:29.149125   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:29.149152   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:29.197830   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:29.197861   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:29.211182   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:29.211204   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:29.276808   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:31.777659   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:31.791374   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:31.791425   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:31.825453   67282 cri.go:89] found id: ""
	I1004 04:26:31.825480   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.825489   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:31.825495   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:31.825553   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:31.857845   67282 cri.go:89] found id: ""
	I1004 04:26:31.857875   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.857884   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:31.857893   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:31.857949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:31.892282   67282 cri.go:89] found id: ""
	I1004 04:26:31.892309   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.892317   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:31.892322   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:31.892366   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:31.926016   67282 cri.go:89] found id: ""
	I1004 04:26:31.926037   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.926045   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:31.926051   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:31.926094   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:31.961382   67282 cri.go:89] found id: ""
	I1004 04:26:31.961415   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.961425   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:31.961433   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:31.961492   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:31.994570   67282 cri.go:89] found id: ""
	I1004 04:26:31.994602   67282 logs.go:282] 0 containers: []
	W1004 04:26:31.994613   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:31.994620   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:31.994675   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:32.027359   67282 cri.go:89] found id: ""
	I1004 04:26:32.027383   67282 logs.go:282] 0 containers: []
	W1004 04:26:32.027391   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:32.027397   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:32.027448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:32.063518   67282 cri.go:89] found id: ""
	I1004 04:26:32.063545   67282 logs.go:282] 0 containers: []
	W1004 04:26:32.063555   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:32.063565   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:32.063577   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:32.151555   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:32.151582   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:32.190678   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:32.190700   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:32.243567   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:32.243596   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:32.256293   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:32.256320   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:32.329513   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:29.548687   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:31.550184   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:34.050659   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:35.147870   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:37.646571   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:35.623302   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:38.122555   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:34.830126   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:34.844760   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:34.844833   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:34.878409   67282 cri.go:89] found id: ""
	I1004 04:26:34.878433   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.878440   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:34.878445   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:34.878500   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:34.916493   67282 cri.go:89] found id: ""
	I1004 04:26:34.916516   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.916524   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:34.916532   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:34.916577   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:34.954532   67282 cri.go:89] found id: ""
	I1004 04:26:34.954556   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.954565   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:34.954570   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:34.954616   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:34.987163   67282 cri.go:89] found id: ""
	I1004 04:26:34.987190   67282 logs.go:282] 0 containers: []
	W1004 04:26:34.987198   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:34.987205   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:34.987261   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:35.021351   67282 cri.go:89] found id: ""
	I1004 04:26:35.021379   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.021388   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:35.021394   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:35.021452   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:35.056350   67282 cri.go:89] found id: ""
	I1004 04:26:35.056376   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.056384   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:35.056390   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:35.056448   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:35.093375   67282 cri.go:89] found id: ""
	I1004 04:26:35.093402   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.093412   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:35.093420   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:35.093486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:35.130509   67282 cri.go:89] found id: ""
	I1004 04:26:35.130532   67282 logs.go:282] 0 containers: []
	W1004 04:26:35.130541   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:35.130549   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:35.130562   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:35.188138   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:35.188174   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:35.202226   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:35.202267   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:35.276652   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:35.276675   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:35.276688   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:35.357339   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:35.357373   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:37.898166   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:37.911319   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:37.911387   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:37.944551   67282 cri.go:89] found id: ""
	I1004 04:26:37.944578   67282 logs.go:282] 0 containers: []
	W1004 04:26:37.944590   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:37.944597   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:37.944652   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:37.978066   67282 cri.go:89] found id: ""
	I1004 04:26:37.978093   67282 logs.go:282] 0 containers: []
	W1004 04:26:37.978101   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:37.978107   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:37.978163   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:38.011065   67282 cri.go:89] found id: ""
	I1004 04:26:38.011095   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.011104   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:38.011109   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:38.011156   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:38.050323   67282 cri.go:89] found id: ""
	I1004 04:26:38.050349   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.050359   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:38.050366   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:38.050425   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:38.089141   67282 cri.go:89] found id: ""
	I1004 04:26:38.089169   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.089177   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:38.089182   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:38.089258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:38.122625   67282 cri.go:89] found id: ""
	I1004 04:26:38.122653   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.122663   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:38.122671   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:38.122719   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:38.159957   67282 cri.go:89] found id: ""
	I1004 04:26:38.159982   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.159990   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:38.159996   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:38.160085   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:38.194592   67282 cri.go:89] found id: ""
	I1004 04:26:38.194618   67282 logs.go:282] 0 containers: []
	W1004 04:26:38.194626   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:38.194646   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:38.194657   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:38.263914   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:38.263945   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:38.263958   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:38.339864   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:38.339895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:38.375477   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:38.375505   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:38.428292   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:38.428320   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:36.050815   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:38.548602   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:39.646794   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.146914   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:40.123280   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.623659   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:40.941910   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:40.955041   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:40.955117   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:40.991278   67282 cri.go:89] found id: ""
	I1004 04:26:40.991307   67282 logs.go:282] 0 containers: []
	W1004 04:26:40.991317   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:40.991325   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:40.991389   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:41.025347   67282 cri.go:89] found id: ""
	I1004 04:26:41.025373   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.025385   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:41.025392   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:41.025450   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:41.060974   67282 cri.go:89] found id: ""
	I1004 04:26:41.061001   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.061019   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:41.061026   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:41.061087   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:41.097557   67282 cri.go:89] found id: ""
	I1004 04:26:41.097587   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.097598   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:41.097605   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:41.097665   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:41.136371   67282 cri.go:89] found id: ""
	I1004 04:26:41.136396   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.136405   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:41.136412   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:41.136472   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:41.172590   67282 cri.go:89] found id: ""
	I1004 04:26:41.172617   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.172627   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:41.172634   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:41.172687   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:41.209124   67282 cri.go:89] found id: ""
	I1004 04:26:41.209146   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.209154   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:41.209159   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:41.209214   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:41.250654   67282 cri.go:89] found id: ""
	I1004 04:26:41.250687   67282 logs.go:282] 0 containers: []
	W1004 04:26:41.250699   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:41.250709   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:41.250723   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:41.305814   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:41.305864   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:41.322961   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:41.322989   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:41.427611   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:41.427632   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:41.427648   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:41.505830   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:41.505877   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:40.549691   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:42.549838   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:44.647149   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.146894   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:45.122344   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.122706   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:44.050902   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:44.065277   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:44.065343   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:44.101089   67282 cri.go:89] found id: ""
	I1004 04:26:44.101110   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.101117   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:44.101123   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:44.101174   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:44.138570   67282 cri.go:89] found id: ""
	I1004 04:26:44.138593   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.138601   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:44.138606   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:44.138650   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:44.178423   67282 cri.go:89] found id: ""
	I1004 04:26:44.178456   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.178478   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:44.178486   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:44.178556   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:44.213301   67282 cri.go:89] found id: ""
	I1004 04:26:44.213330   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.213338   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:44.213344   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:44.213401   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:44.247653   67282 cri.go:89] found id: ""
	I1004 04:26:44.247681   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.247688   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:44.247694   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:44.247756   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:44.281667   67282 cri.go:89] found id: ""
	I1004 04:26:44.281693   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.281704   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:44.281711   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:44.281767   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:44.314637   67282 cri.go:89] found id: ""
	I1004 04:26:44.314667   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.314677   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:44.314684   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:44.314760   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:44.349432   67282 cri.go:89] found id: ""
	I1004 04:26:44.349459   67282 logs.go:282] 0 containers: []
	W1004 04:26:44.349469   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:44.349479   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:44.349492   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:44.397134   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:44.397168   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:44.410708   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:44.410738   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:44.482025   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:44.482049   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:44.482065   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:44.562652   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:44.562699   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:47.101459   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:47.116923   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:47.117020   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:47.153495   67282 cri.go:89] found id: ""
	I1004 04:26:47.153524   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.153534   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:47.153541   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:47.153601   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:47.189976   67282 cri.go:89] found id: ""
	I1004 04:26:47.190004   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.190014   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:47.190023   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:47.190084   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:47.225712   67282 cri.go:89] found id: ""
	I1004 04:26:47.225740   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.225748   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:47.225754   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:47.225800   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:47.261565   67282 cri.go:89] found id: ""
	I1004 04:26:47.261593   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.261603   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:47.261608   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:47.261665   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:47.298152   67282 cri.go:89] found id: ""
	I1004 04:26:47.298204   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.298214   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:47.298223   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:47.298279   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:47.338226   67282 cri.go:89] found id: ""
	I1004 04:26:47.338253   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.338261   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:47.338267   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:47.338320   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:47.378859   67282 cri.go:89] found id: ""
	I1004 04:26:47.378892   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.378902   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:47.378909   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:47.378964   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:47.418161   67282 cri.go:89] found id: ""
	I1004 04:26:47.418186   67282 logs.go:282] 0 containers: []
	W1004 04:26:47.418194   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:47.418203   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:47.418213   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:47.470271   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:47.470311   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:47.484416   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:47.484453   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:47.556744   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:47.556767   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:47.556778   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:47.634266   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:47.634299   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:45.050501   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:47.550072   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:49.147562   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:51.648504   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:49.623375   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:52.122346   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:50.175746   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:50.191850   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:50.191945   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:50.229542   67282 cri.go:89] found id: ""
	I1004 04:26:50.229574   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.229584   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:50.229593   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:50.229655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:50.268401   67282 cri.go:89] found id: ""
	I1004 04:26:50.268432   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.268441   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:50.268449   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:50.268522   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:50.302927   67282 cri.go:89] found id: ""
	I1004 04:26:50.302954   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.302964   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:50.302969   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:50.303029   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:50.336617   67282 cri.go:89] found id: ""
	I1004 04:26:50.336646   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.336656   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:50.336663   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:50.336724   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:50.372871   67282 cri.go:89] found id: ""
	I1004 04:26:50.372901   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.372911   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:50.372918   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:50.372977   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:50.409601   67282 cri.go:89] found id: ""
	I1004 04:26:50.409629   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.409640   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:50.409648   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:50.409723   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:50.451899   67282 cri.go:89] found id: ""
	I1004 04:26:50.451927   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.451935   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:50.451940   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:50.451991   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:50.487306   67282 cri.go:89] found id: ""
	I1004 04:26:50.487332   67282 logs.go:282] 0 containers: []
	W1004 04:26:50.487343   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:50.487353   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:50.487369   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:50.565167   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:50.565192   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:50.565207   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:50.646155   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:50.646194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:50.688459   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:50.688489   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:50.742416   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:50.742460   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:53.257063   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:53.270546   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:53.270618   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:53.306504   67282 cri.go:89] found id: ""
	I1004 04:26:53.306530   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.306538   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:53.306544   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:53.306594   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:53.343256   67282 cri.go:89] found id: ""
	I1004 04:26:53.343285   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.343293   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:53.343299   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:53.343352   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:53.380834   67282 cri.go:89] found id: ""
	I1004 04:26:53.380864   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.380873   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:53.380880   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:53.380940   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:53.417361   67282 cri.go:89] found id: ""
	I1004 04:26:53.417391   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.417404   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:53.417415   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:53.417479   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:53.451948   67282 cri.go:89] found id: ""
	I1004 04:26:53.451970   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.451978   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:53.451983   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:53.452039   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:53.487731   67282 cri.go:89] found id: ""
	I1004 04:26:53.487756   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.487764   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:53.487769   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:53.487836   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:50.049952   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:52.050275   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:54.151420   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.647593   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:54.122386   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.623398   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:53.531549   67282 cri.go:89] found id: ""
	I1004 04:26:53.531573   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.531582   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:53.531587   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:53.531643   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:53.578123   67282 cri.go:89] found id: ""
	I1004 04:26:53.578151   67282 logs.go:282] 0 containers: []
	W1004 04:26:53.578162   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:53.578180   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:53.578195   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:53.643062   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:53.643093   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:53.696157   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:53.696194   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:53.709884   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:53.709910   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:53.791272   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:53.791297   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:53.791314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:56.371608   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:56.386293   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:56.386376   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:56.425531   67282 cri.go:89] found id: ""
	I1004 04:26:56.425560   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.425571   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:56.425578   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:56.425646   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:56.470293   67282 cri.go:89] found id: ""
	I1004 04:26:56.470326   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.470335   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:56.470340   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:56.470400   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:56.508927   67282 cri.go:89] found id: ""
	I1004 04:26:56.508955   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.508963   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:56.508968   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:56.509018   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:56.549149   67282 cri.go:89] found id: ""
	I1004 04:26:56.549178   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.549191   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:56.549199   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:56.549270   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:56.589412   67282 cri.go:89] found id: ""
	I1004 04:26:56.589441   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.589451   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:56.589459   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:56.589517   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:56.624732   67282 cri.go:89] found id: ""
	I1004 04:26:56.624760   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.624770   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:56.624776   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:56.624838   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:56.662385   67282 cri.go:89] found id: ""
	I1004 04:26:56.662413   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.662421   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:56.662427   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:56.662483   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:56.697982   67282 cri.go:89] found id: ""
	I1004 04:26:56.698014   67282 logs.go:282] 0 containers: []
	W1004 04:26:56.698025   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:56.698036   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:56.698049   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:56.750597   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:56.750633   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:56.764884   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:56.764921   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:26:56.844404   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:26:56.844433   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:26:56.844451   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:26:56.924373   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:56.924406   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:54.548706   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:56.549763   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.049294   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:58.648470   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:01.146948   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.148357   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.123321   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:01.622391   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:26:59.466449   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:26:59.481897   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:26:59.481972   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:26:59.535384   67282 cri.go:89] found id: ""
	I1004 04:26:59.535411   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.535422   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:26:59.535428   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:26:59.535486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:26:59.595843   67282 cri.go:89] found id: ""
	I1004 04:26:59.595875   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.595886   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:26:59.595894   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:26:59.595954   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:26:59.641010   67282 cri.go:89] found id: ""
	I1004 04:26:59.641041   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.641049   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:26:59.641057   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:26:59.641102   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:26:59.679705   67282 cri.go:89] found id: ""
	I1004 04:26:59.679736   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.679746   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:26:59.679753   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:26:59.679828   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:26:59.715960   67282 cri.go:89] found id: ""
	I1004 04:26:59.715985   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.715993   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:26:59.715998   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:26:59.716047   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:26:59.757406   67282 cri.go:89] found id: ""
	I1004 04:26:59.757442   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.757453   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:26:59.757461   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:26:59.757528   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:26:59.792038   67282 cri.go:89] found id: ""
	I1004 04:26:59.792066   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.792076   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:26:59.792083   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:26:59.792141   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:26:59.830258   67282 cri.go:89] found id: ""
	I1004 04:26:59.830281   67282 logs.go:282] 0 containers: []
	W1004 04:26:59.830289   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:26:59.830296   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:26:59.830308   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:26:59.877273   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:26:59.877304   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:26:59.932570   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:26:59.932610   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:26:59.945896   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:26:59.945919   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:00.020363   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:00.020392   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:00.020412   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:02.601022   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:02.615039   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:02.615112   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:02.654541   67282 cri.go:89] found id: ""
	I1004 04:27:02.654567   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.654574   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:02.654579   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:02.654638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:02.691313   67282 cri.go:89] found id: ""
	I1004 04:27:02.691338   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.691349   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:02.691355   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:02.691414   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:02.735337   67282 cri.go:89] found id: ""
	I1004 04:27:02.735367   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.735376   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:02.735383   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:02.735486   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:02.769604   67282 cri.go:89] found id: ""
	I1004 04:27:02.769628   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.769638   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:02.769643   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:02.769704   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:02.812913   67282 cri.go:89] found id: ""
	I1004 04:27:02.812938   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.812949   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:02.812954   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:02.813020   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:02.849910   67282 cri.go:89] found id: ""
	I1004 04:27:02.849939   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.849949   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:02.849956   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:02.850023   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:02.889467   67282 cri.go:89] found id: ""
	I1004 04:27:02.889497   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.889509   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:02.889517   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:02.889575   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:02.928508   67282 cri.go:89] found id: ""
	I1004 04:27:02.928529   67282 logs.go:282] 0 containers: []
	W1004 04:27:02.928537   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:02.928545   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:02.928556   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:02.942783   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:02.942821   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:03.018282   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:03.018304   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:03.018314   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:03.101588   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:03.101622   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:03.149911   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:03.149937   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:01.051581   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.550066   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.646200   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:07.648479   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:03.622932   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.623005   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.121151   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:05.703125   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:05.717243   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:05.717303   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:05.752564   67282 cri.go:89] found id: ""
	I1004 04:27:05.752588   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.752597   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:05.752609   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:05.752656   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:05.786955   67282 cri.go:89] found id: ""
	I1004 04:27:05.786983   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.786994   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:05.787001   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:05.787073   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:05.823848   67282 cri.go:89] found id: ""
	I1004 04:27:05.823882   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.823893   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:05.823901   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:05.823970   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:05.866192   67282 cri.go:89] found id: ""
	I1004 04:27:05.866220   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.866238   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:05.866246   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:05.866305   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:05.904051   67282 cri.go:89] found id: ""
	I1004 04:27:05.904078   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.904089   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:05.904096   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:05.904154   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:05.940041   67282 cri.go:89] found id: ""
	I1004 04:27:05.940075   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.940085   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:05.940092   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:05.940158   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:05.975758   67282 cri.go:89] found id: ""
	I1004 04:27:05.975799   67282 logs.go:282] 0 containers: []
	W1004 04:27:05.975810   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:05.975818   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:05.975892   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:06.011044   67282 cri.go:89] found id: ""
	I1004 04:27:06.011086   67282 logs.go:282] 0 containers: []
	W1004 04:27:06.011096   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:06.011105   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:06.011116   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:06.024900   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:06.024937   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:06.109932   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:06.109960   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:06.109976   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:06.189517   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:06.189557   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:06.230019   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:06.230048   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:06.050004   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.548768   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:10.147814   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.646430   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:10.122097   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.123967   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:08.785355   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:08.799156   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:08.799218   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:08.843606   67282 cri.go:89] found id: ""
	I1004 04:27:08.843634   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.843643   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:08.843648   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:08.843698   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:08.884418   67282 cri.go:89] found id: ""
	I1004 04:27:08.884443   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.884450   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:08.884456   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:08.884503   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:08.925878   67282 cri.go:89] found id: ""
	I1004 04:27:08.925906   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.925914   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:08.925920   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:08.925970   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:08.966127   67282 cri.go:89] found id: ""
	I1004 04:27:08.966157   67282 logs.go:282] 0 containers: []
	W1004 04:27:08.966167   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:08.966173   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:08.966227   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:09.010646   67282 cri.go:89] found id: ""
	I1004 04:27:09.010672   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.010682   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:09.010702   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:09.010769   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:09.049738   67282 cri.go:89] found id: ""
	I1004 04:27:09.049761   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.049768   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:09.049774   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:09.049825   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:09.082709   67282 cri.go:89] found id: ""
	I1004 04:27:09.082739   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.082747   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:09.082752   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:09.082808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:09.120574   67282 cri.go:89] found id: ""
	I1004 04:27:09.120605   67282 logs.go:282] 0 containers: []
	W1004 04:27:09.120617   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:09.120626   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:09.120636   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:09.202880   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:09.202922   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:09.242668   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:09.242700   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:09.298662   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:09.298703   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:09.314832   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:09.314868   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:09.389062   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:11.889645   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:11.902953   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:11.903012   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:11.939846   67282 cri.go:89] found id: ""
	I1004 04:27:11.939874   67282 logs.go:282] 0 containers: []
	W1004 04:27:11.939882   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:11.939888   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:11.939936   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:11.975281   67282 cri.go:89] found id: ""
	I1004 04:27:11.975303   67282 logs.go:282] 0 containers: []
	W1004 04:27:11.975311   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:11.975317   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:11.975370   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:12.011400   67282 cri.go:89] found id: ""
	I1004 04:27:12.011428   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.011438   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:12.011443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:12.011506   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:12.046862   67282 cri.go:89] found id: ""
	I1004 04:27:12.046889   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.046898   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:12.046905   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:12.046960   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:12.081537   67282 cri.go:89] found id: ""
	I1004 04:27:12.081569   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.081581   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:12.081590   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:12.081655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:12.121982   67282 cri.go:89] found id: ""
	I1004 04:27:12.122010   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.122021   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:12.122028   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:12.122086   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:12.161419   67282 cri.go:89] found id: ""
	I1004 04:27:12.161460   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.161473   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:12.161481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:12.161549   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:12.202188   67282 cri.go:89] found id: ""
	I1004 04:27:12.202230   67282 logs.go:282] 0 containers: []
	W1004 04:27:12.202242   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:12.202253   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:12.202267   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:12.253424   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:12.253462   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:12.268116   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:12.268141   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:12.337788   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:12.337814   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:12.337826   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:12.417359   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:12.417395   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:10.549097   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:12.549239   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.647267   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:17.147526   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.623050   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:16.623702   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:14.959596   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:14.973031   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:14.973090   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:15.011451   67282 cri.go:89] found id: ""
	I1004 04:27:15.011487   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.011497   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:15.011513   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:15.011572   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:15.055767   67282 cri.go:89] found id: ""
	I1004 04:27:15.055817   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.055829   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:15.055836   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:15.055915   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:15.096357   67282 cri.go:89] found id: ""
	I1004 04:27:15.096385   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.096394   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:15.096399   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:15.096456   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:15.131824   67282 cri.go:89] found id: ""
	I1004 04:27:15.131853   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.131863   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:15.131870   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:15.131932   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:15.169250   67282 cri.go:89] found id: ""
	I1004 04:27:15.169285   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.169299   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:15.169307   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:15.169373   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:15.206852   67282 cri.go:89] found id: ""
	I1004 04:27:15.206881   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.206889   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:15.206895   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:15.206949   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:15.241392   67282 cri.go:89] found id: ""
	I1004 04:27:15.241421   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.241431   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:15.241439   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:15.241498   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:15.280697   67282 cri.go:89] found id: ""
	I1004 04:27:15.280723   67282 logs.go:282] 0 containers: []
	W1004 04:27:15.280734   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:15.280744   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:15.280758   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:15.361681   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:15.361716   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:15.404640   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:15.404676   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:15.457287   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:15.457326   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:15.471162   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:15.471188   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:15.544157   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:18.045094   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:18.060228   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:18.060310   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:18.096659   67282 cri.go:89] found id: ""
	I1004 04:27:18.096688   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.096697   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:18.096703   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:18.096757   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:18.135538   67282 cri.go:89] found id: ""
	I1004 04:27:18.135565   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.135573   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:18.135579   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:18.135629   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:18.171051   67282 cri.go:89] found id: ""
	I1004 04:27:18.171082   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.171098   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:18.171106   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:18.171168   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:18.205696   67282 cri.go:89] found id: ""
	I1004 04:27:18.205725   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.205735   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:18.205742   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:18.205803   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:18.240545   67282 cri.go:89] found id: ""
	I1004 04:27:18.240566   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.240576   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:18.240584   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:18.240638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:18.279185   67282 cri.go:89] found id: ""
	I1004 04:27:18.279221   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.279232   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:18.279239   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:18.279310   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:18.318395   67282 cri.go:89] found id: ""
	I1004 04:27:18.318417   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.318424   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:18.318430   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:18.318476   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:18.352367   67282 cri.go:89] found id: ""
	I1004 04:27:18.352390   67282 logs.go:282] 0 containers: []
	W1004 04:27:18.352398   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:18.352407   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:18.352420   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:18.365604   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:18.365637   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:18.438407   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:18.438427   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:18.438438   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:14.549690   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:16.550244   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:18.550355   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:19.647031   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:22.147826   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:19.126090   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:21.623910   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:18.513645   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:18.513679   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:18.557224   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:18.557250   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:21.111005   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:21.126573   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:21.126631   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:21.161161   67282 cri.go:89] found id: ""
	I1004 04:27:21.161190   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.161201   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:21.161207   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:21.161258   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:21.199517   67282 cri.go:89] found id: ""
	I1004 04:27:21.199544   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.199555   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:21.199562   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:21.199625   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:21.236210   67282 cri.go:89] found id: ""
	I1004 04:27:21.236238   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.236246   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:21.236251   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:21.236311   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:21.272720   67282 cri.go:89] found id: ""
	I1004 04:27:21.272746   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.272753   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:21.272759   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:21.272808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:21.311439   67282 cri.go:89] found id: ""
	I1004 04:27:21.311474   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.311484   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:21.311491   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:21.311551   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:21.360400   67282 cri.go:89] found id: ""
	I1004 04:27:21.360427   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.360436   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:21.360443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:21.360511   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:21.394627   67282 cri.go:89] found id: ""
	I1004 04:27:21.394656   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.394667   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:21.394673   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:21.394721   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:21.429736   67282 cri.go:89] found id: ""
	I1004 04:27:21.429762   67282 logs.go:282] 0 containers: []
	W1004 04:27:21.429770   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:21.429778   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:21.429789   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:21.482773   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:21.482808   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:21.497570   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:21.497595   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:21.582335   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:21.582355   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:21.582367   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:21.662196   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:21.662230   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:21.050000   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:23.050516   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.647074   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:27.147999   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.123142   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:26.624049   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:24.205743   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:24.222878   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:24.222951   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:24.263410   67282 cri.go:89] found id: ""
	I1004 04:27:24.263450   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.263462   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:24.263469   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:24.263532   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:24.306892   67282 cri.go:89] found id: ""
	I1004 04:27:24.306923   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.306934   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:24.306941   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:24.307008   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:24.345522   67282 cri.go:89] found id: ""
	I1004 04:27:24.345559   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.345571   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:24.345579   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:24.345638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:24.384893   67282 cri.go:89] found id: ""
	I1004 04:27:24.384918   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.384925   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:24.384931   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:24.384978   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:24.420998   67282 cri.go:89] found id: ""
	I1004 04:27:24.421025   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.421036   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:24.421043   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:24.421105   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:24.456277   67282 cri.go:89] found id: ""
	I1004 04:27:24.456305   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.456315   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:24.456322   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:24.456383   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:24.497852   67282 cri.go:89] found id: ""
	I1004 04:27:24.497881   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.497892   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:24.497900   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:24.497960   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:24.538702   67282 cri.go:89] found id: ""
	I1004 04:27:24.538736   67282 logs.go:282] 0 containers: []
	W1004 04:27:24.538755   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:24.538766   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:24.538778   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:24.553747   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:24.553773   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:24.638059   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:24.638081   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:24.638093   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:24.718165   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:24.718212   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:24.759770   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:24.759811   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:27.311684   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:27.327493   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:27.327570   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:27.362804   67282 cri.go:89] found id: ""
	I1004 04:27:27.362827   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.362836   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:27.362841   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:27.362888   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:27.401576   67282 cri.go:89] found id: ""
	I1004 04:27:27.401604   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.401614   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:27.401621   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:27.401682   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:27.445152   67282 cri.go:89] found id: ""
	I1004 04:27:27.445177   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.445187   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:27.445193   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:27.445240   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:27.482710   67282 cri.go:89] found id: ""
	I1004 04:27:27.482734   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.482742   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:27.482749   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:27.482808   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:27.519459   67282 cri.go:89] found id: ""
	I1004 04:27:27.519488   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.519498   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:27.519505   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:27.519569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:27.559381   67282 cri.go:89] found id: ""
	I1004 04:27:27.559407   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.559417   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:27.559423   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:27.559468   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:27.609040   67282 cri.go:89] found id: ""
	I1004 04:27:27.609068   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.609076   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:27.609081   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:27.609128   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:27.654537   67282 cri.go:89] found id: ""
	I1004 04:27:27.654569   67282 logs.go:282] 0 containers: []
	W1004 04:27:27.654579   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:27.654590   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:27.654603   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:27.709062   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:27.709098   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:27.722931   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:27.722955   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:27.796863   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:27.796884   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:27.796895   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:27.879840   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:27.879876   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:25.549643   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:27.551373   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:29.646879   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:31.646956   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:29.122087   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:31.122774   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:30.423644   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:30.439256   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:30.439311   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:30.479612   67282 cri.go:89] found id: ""
	I1004 04:27:30.479640   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.479648   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:30.479654   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:30.479750   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:30.522846   67282 cri.go:89] found id: ""
	I1004 04:27:30.522879   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.522890   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:30.522898   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:30.522946   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:30.558935   67282 cri.go:89] found id: ""
	I1004 04:27:30.558962   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.558971   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:30.558976   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:30.559032   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:30.603383   67282 cri.go:89] found id: ""
	I1004 04:27:30.603411   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.603421   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:30.603428   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:30.603492   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:30.644700   67282 cri.go:89] found id: ""
	I1004 04:27:30.644727   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.644737   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:30.644744   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:30.644799   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:30.680328   67282 cri.go:89] found id: ""
	I1004 04:27:30.680358   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.680367   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:30.680372   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:30.680419   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:30.717973   67282 cri.go:89] found id: ""
	I1004 04:27:30.717995   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.718005   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:30.718021   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:30.718082   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:30.755838   67282 cri.go:89] found id: ""
	I1004 04:27:30.755866   67282 logs.go:282] 0 containers: []
	W1004 04:27:30.755874   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:30.755882   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:30.755893   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:30.809999   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:30.810036   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:30.824447   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:30.824491   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:30.902008   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:30.902030   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:30.902043   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:30.986938   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:30.986984   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:30.049983   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:32.050033   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:34.050671   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.647707   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:36.146619   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.624575   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:36.122046   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:33.531108   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:33.546681   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:33.546759   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:33.586444   67282 cri.go:89] found id: ""
	I1004 04:27:33.586469   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.586479   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:33.586486   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:33.586552   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:33.629340   67282 cri.go:89] found id: ""
	I1004 04:27:33.629365   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.629373   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:33.629378   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:33.629429   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:33.668446   67282 cri.go:89] found id: ""
	I1004 04:27:33.668473   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.668483   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:33.668490   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:33.668548   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:33.706287   67282 cri.go:89] found id: ""
	I1004 04:27:33.706312   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.706320   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:33.706327   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:33.706385   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:33.746161   67282 cri.go:89] found id: ""
	I1004 04:27:33.746189   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.746200   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:33.746207   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:33.746270   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:33.782157   67282 cri.go:89] found id: ""
	I1004 04:27:33.782184   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.782194   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:33.782200   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:33.782262   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:33.820332   67282 cri.go:89] found id: ""
	I1004 04:27:33.820361   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.820371   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:33.820378   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:33.820437   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:33.859431   67282 cri.go:89] found id: ""
	I1004 04:27:33.859458   67282 logs.go:282] 0 containers: []
	W1004 04:27:33.859467   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:33.859475   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:33.859485   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:33.910259   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:33.910292   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:33.925149   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:33.925177   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:34.006153   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:34.006187   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:34.006202   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:34.115882   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:34.115916   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:36.662964   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:36.677071   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:36.677139   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:36.720785   67282 cri.go:89] found id: ""
	I1004 04:27:36.720807   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.720818   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:36.720826   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:36.720875   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:36.757535   67282 cri.go:89] found id: ""
	I1004 04:27:36.757563   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.757574   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:36.757582   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:36.757630   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:36.800989   67282 cri.go:89] found id: ""
	I1004 04:27:36.801024   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.801038   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:36.801046   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:36.801112   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:36.837101   67282 cri.go:89] found id: ""
	I1004 04:27:36.837122   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.837131   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:36.837136   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:36.837181   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:36.876325   67282 cri.go:89] found id: ""
	I1004 04:27:36.876358   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.876370   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:36.876379   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:36.876444   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:36.914720   67282 cri.go:89] found id: ""
	I1004 04:27:36.914749   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.914759   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:36.914767   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:36.914828   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:36.949672   67282 cri.go:89] found id: ""
	I1004 04:27:36.949694   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.949701   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:36.949706   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:36.949754   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:36.983374   67282 cri.go:89] found id: ""
	I1004 04:27:36.983406   67282 logs.go:282] 0 containers: []
	W1004 04:27:36.983416   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:36.983427   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:36.983440   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:37.039040   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:37.039075   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:37.054873   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:37.054898   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:37.131537   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:37.131562   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:37.131577   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:37.213958   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:37.213990   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:36.548751   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:39.049804   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:38.646028   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:40.646213   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:42.648505   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:38.623560   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:40.623721   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:43.122033   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:39.754264   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:39.771465   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:39.771545   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:39.829530   67282 cri.go:89] found id: ""
	I1004 04:27:39.829560   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.829572   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:39.829580   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:39.829639   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:39.876055   67282 cri.go:89] found id: ""
	I1004 04:27:39.876078   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.876090   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:39.876095   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:39.876142   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:39.913304   67282 cri.go:89] found id: ""
	I1004 04:27:39.913327   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.913335   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:39.913340   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:39.913389   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:39.948821   67282 cri.go:89] found id: ""
	I1004 04:27:39.948847   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.948855   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:39.948862   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:39.948916   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:39.986994   67282 cri.go:89] found id: ""
	I1004 04:27:39.987023   67282 logs.go:282] 0 containers: []
	W1004 04:27:39.987034   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:39.987041   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:39.987141   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:40.026627   67282 cri.go:89] found id: ""
	I1004 04:27:40.026656   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.026668   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:40.026675   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:40.026734   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:40.067028   67282 cri.go:89] found id: ""
	I1004 04:27:40.067068   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.067079   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:40.067086   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:40.067144   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:40.105638   67282 cri.go:89] found id: ""
	I1004 04:27:40.105667   67282 logs.go:282] 0 containers: []
	W1004 04:27:40.105677   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:40.105694   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:40.105707   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:40.159425   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:40.159467   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:40.175045   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:40.175073   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:40.261967   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:40.261989   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:40.262002   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:40.345317   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:40.345354   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:42.888115   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:42.901889   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:42.901948   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:42.938556   67282 cri.go:89] found id: ""
	I1004 04:27:42.938587   67282 logs.go:282] 0 containers: []
	W1004 04:27:42.938597   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:42.938604   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:42.938668   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:42.974569   67282 cri.go:89] found id: ""
	I1004 04:27:42.974595   67282 logs.go:282] 0 containers: []
	W1004 04:27:42.974606   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:42.974613   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:42.974679   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:43.010552   67282 cri.go:89] found id: ""
	I1004 04:27:43.010581   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.010593   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:43.010600   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:43.010655   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:43.046204   67282 cri.go:89] found id: ""
	I1004 04:27:43.046237   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.046247   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:43.046254   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:43.046313   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:43.081612   67282 cri.go:89] found id: ""
	I1004 04:27:43.081644   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.081655   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:43.081662   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:43.081729   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:43.121103   67282 cri.go:89] found id: ""
	I1004 04:27:43.121126   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.121133   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:43.121139   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:43.121191   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:43.157104   67282 cri.go:89] found id: ""
	I1004 04:27:43.157128   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.157136   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:43.157141   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:43.157196   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:43.198927   67282 cri.go:89] found id: ""
	I1004 04:27:43.198951   67282 logs.go:282] 0 containers: []
	W1004 04:27:43.198958   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:43.198966   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:43.198975   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:43.254534   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:43.254563   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:43.268106   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:43.268130   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:43.344382   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:43.344410   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:43.344425   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:43.426916   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:43.426948   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:41.549364   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:43.549590   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.146452   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:47.148300   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.126135   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:47.622568   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:45.966806   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:45.980187   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:45.980252   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:46.014196   67282 cri.go:89] found id: ""
	I1004 04:27:46.014220   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.014228   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:46.014233   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:46.014295   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:46.053910   67282 cri.go:89] found id: ""
	I1004 04:27:46.053940   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.053951   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:46.053957   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:46.054013   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:46.087896   67282 cri.go:89] found id: ""
	I1004 04:27:46.087921   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.087930   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:46.087936   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:46.087985   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:46.123441   67282 cri.go:89] found id: ""
	I1004 04:27:46.123465   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.123475   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:46.123481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:46.123545   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:46.159664   67282 cri.go:89] found id: ""
	I1004 04:27:46.159688   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.159698   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:46.159704   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:46.159761   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:46.195474   67282 cri.go:89] found id: ""
	I1004 04:27:46.195501   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.195512   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:46.195525   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:46.195569   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:46.228670   67282 cri.go:89] found id: ""
	I1004 04:27:46.228693   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.228701   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:46.228706   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:46.228759   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:46.265278   67282 cri.go:89] found id: ""
	I1004 04:27:46.265303   67282 logs.go:282] 0 containers: []
	W1004 04:27:46.265311   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:46.265325   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:46.265338   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:46.315135   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:46.315163   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:46.327765   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:46.327797   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:46.393157   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:46.393173   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:46.393184   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:46.473026   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:46.473058   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:46.049285   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:48.549053   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:49.647027   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:52.146841   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:50.122921   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:52.622913   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:49.011972   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:49.025718   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:49.025783   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:49.062749   67282 cri.go:89] found id: ""
	I1004 04:27:49.062774   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.062782   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:49.062788   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:49.062844   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:49.100838   67282 cri.go:89] found id: ""
	I1004 04:27:49.100886   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.100897   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:49.100904   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:49.100961   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:49.139966   67282 cri.go:89] found id: ""
	I1004 04:27:49.139990   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.140000   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:49.140007   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:49.140088   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:49.179347   67282 cri.go:89] found id: ""
	I1004 04:27:49.179373   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.179384   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:49.179391   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:49.179435   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:49.218086   67282 cri.go:89] found id: ""
	I1004 04:27:49.218112   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.218121   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:49.218127   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:49.218181   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:49.254779   67282 cri.go:89] found id: ""
	I1004 04:27:49.254811   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.254823   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:49.254830   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:49.254888   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:49.287351   67282 cri.go:89] found id: ""
	I1004 04:27:49.287381   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.287392   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:49.287398   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:49.287456   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:49.320051   67282 cri.go:89] found id: ""
	I1004 04:27:49.320078   67282 logs.go:282] 0 containers: []
	W1004 04:27:49.320089   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:49.320100   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:49.320112   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:49.371270   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:49.371300   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:49.384403   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:49.384432   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:49.468132   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:49.468154   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:49.468167   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:49.543179   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:49.543211   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:52.093235   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:52.108446   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:52.108520   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:52.147590   67282 cri.go:89] found id: ""
	I1004 04:27:52.147613   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.147620   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:52.147626   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:52.147677   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:52.183066   67282 cri.go:89] found id: ""
	I1004 04:27:52.183095   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.183105   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:52.183112   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:52.183170   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:52.223109   67282 cri.go:89] found id: ""
	I1004 04:27:52.223140   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.223154   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:52.223165   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:52.223223   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:52.259547   67282 cri.go:89] found id: ""
	I1004 04:27:52.259573   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.259582   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:52.259587   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:52.259638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:52.296934   67282 cri.go:89] found id: ""
	I1004 04:27:52.296961   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.296971   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:52.296979   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:52.297040   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:52.331650   67282 cri.go:89] found id: ""
	I1004 04:27:52.331671   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.331679   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:52.331684   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:52.331728   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:52.365111   67282 cri.go:89] found id: ""
	I1004 04:27:52.365139   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.365150   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:52.365157   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:52.365239   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:52.400974   67282 cri.go:89] found id: ""
	I1004 04:27:52.401010   67282 logs.go:282] 0 containers: []
	W1004 04:27:52.401023   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:52.401035   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:52.401049   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:52.484732   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:52.484771   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:52.523322   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:52.523348   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:52.576671   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:52.576702   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:52.590263   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:52.590291   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:52.666646   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:50.549475   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:53.049259   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:54.646262   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.153196   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:55.123174   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.123932   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:55.166856   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:55.181481   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:55.181562   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:55.218023   67282 cri.go:89] found id: ""
	I1004 04:27:55.218048   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.218056   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:55.218063   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:55.218121   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:55.256439   67282 cri.go:89] found id: ""
	I1004 04:27:55.256464   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.256472   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:55.256477   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:55.256531   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:55.294563   67282 cri.go:89] found id: ""
	I1004 04:27:55.294588   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.294596   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:55.294601   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:55.294656   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:55.331266   67282 cri.go:89] found id: ""
	I1004 04:27:55.331290   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.331300   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:55.331306   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:55.331370   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:55.367286   67282 cri.go:89] found id: ""
	I1004 04:27:55.367314   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.367325   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:55.367332   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:55.367391   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:55.402031   67282 cri.go:89] found id: ""
	I1004 04:27:55.402054   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.402062   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:55.402068   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:55.402122   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:55.437737   67282 cri.go:89] found id: ""
	I1004 04:27:55.437764   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.437774   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:55.437780   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:55.437842   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:55.470654   67282 cri.go:89] found id: ""
	I1004 04:27:55.470692   67282 logs.go:282] 0 containers: []
	W1004 04:27:55.470704   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:55.470713   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:55.470726   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:55.521364   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:55.521393   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:55.534691   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:55.534716   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:55.600902   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:27:55.600923   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:55.600933   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:55.678896   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:55.678940   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:58.220086   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:27:58.234049   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:27:58.234110   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:27:58.281112   67282 cri.go:89] found id: ""
	I1004 04:27:58.281135   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.281143   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:27:58.281148   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:27:58.281191   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:27:58.320549   67282 cri.go:89] found id: ""
	I1004 04:27:58.320575   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.320584   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:27:58.320589   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:27:58.320635   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:27:58.355139   67282 cri.go:89] found id: ""
	I1004 04:27:58.355166   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.355174   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:27:58.355179   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:27:58.355225   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:27:58.387809   67282 cri.go:89] found id: ""
	I1004 04:27:58.387836   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.387846   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:27:58.387851   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:27:58.387908   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:27:58.420264   67282 cri.go:89] found id: ""
	I1004 04:27:58.420287   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.420295   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:27:58.420300   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:27:58.420349   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:27:58.455409   67282 cri.go:89] found id: ""
	I1004 04:27:58.455431   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.455438   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:27:58.455443   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:27:58.455487   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:27:58.488708   67282 cri.go:89] found id: ""
	I1004 04:27:58.488734   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.488742   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:27:58.488749   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:27:58.488797   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:27:55.051622   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:57.548584   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:59.646699   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:01.648277   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:59.623008   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:02.122303   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:27:58.522139   67282 cri.go:89] found id: ""
	I1004 04:27:58.522161   67282 logs.go:282] 0 containers: []
	W1004 04:27:58.522169   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:27:58.522176   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:27:58.522187   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:27:58.604653   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:27:58.604683   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:58.645141   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:27:58.645169   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:27:58.699716   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:27:58.699748   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:27:58.713197   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:27:58.713228   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:27:58.781998   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:01.282429   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:01.297266   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:01.297343   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:01.330421   67282 cri.go:89] found id: ""
	I1004 04:28:01.330446   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.330454   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:28:01.330459   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:01.330514   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:01.366960   67282 cri.go:89] found id: ""
	I1004 04:28:01.366983   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.366992   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:28:01.366998   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:01.367067   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:01.400886   67282 cri.go:89] found id: ""
	I1004 04:28:01.400910   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.400920   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:28:01.400931   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:01.400987   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:01.435556   67282 cri.go:89] found id: ""
	I1004 04:28:01.435586   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.435594   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:28:01.435601   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:01.435649   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:01.475772   67282 cri.go:89] found id: ""
	I1004 04:28:01.475810   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.475820   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:28:01.475826   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:01.475884   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:01.512380   67282 cri.go:89] found id: ""
	I1004 04:28:01.512403   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.512411   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:28:01.512417   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:01.512465   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:01.550488   67282 cri.go:89] found id: ""
	I1004 04:28:01.550517   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.550528   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:01.550536   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:28:01.550595   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:28:01.586216   67282 cri.go:89] found id: ""
	I1004 04:28:01.586249   67282 logs.go:282] 0 containers: []
	W1004 04:28:01.586261   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:28:01.586271   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:01.586285   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:01.640819   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:01.640860   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:01.656990   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:01.657020   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:28:01.731326   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:01.731354   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:01.731368   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:01.810007   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:28:01.810044   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:27:59.548748   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:01.551035   66755 pod_ready.go:103] pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.043116   66755 pod_ready.go:82] duration metric: took 4m0.000354814s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" ...
	E1004 04:28:04.043143   66755 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-d5b6b" in "kube-system" namespace to be "Ready" (will not retry!)
	I1004 04:28:04.043167   66755 pod_ready.go:39] duration metric: took 4m15.403862245s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:04.043219   66755 kubeadm.go:597] duration metric: took 4m23.226496183s to restartPrimaryControlPlane
	W1004 04:28:04.043288   66755 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1004 04:28:04.043316   66755 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:28:04.146697   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:06.147038   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:08.147201   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.122463   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:06.622379   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:04.352648   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:04.366150   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:04.366227   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:04.403272   67282 cri.go:89] found id: ""
	I1004 04:28:04.403298   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.403308   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:28:04.403315   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:04.403371   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:04.439237   67282 cri.go:89] found id: ""
	I1004 04:28:04.439269   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.439280   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:28:04.439287   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:04.439345   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:04.475532   67282 cri.go:89] found id: ""
	I1004 04:28:04.475558   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.475569   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:28:04.475576   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:04.475638   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:04.511738   67282 cri.go:89] found id: ""
	I1004 04:28:04.511765   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.511775   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:28:04.511792   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:04.511850   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:04.553536   67282 cri.go:89] found id: ""
	I1004 04:28:04.553561   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.553568   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:28:04.553574   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:04.553625   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:04.589016   67282 cri.go:89] found id: ""
	I1004 04:28:04.589044   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.589053   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:28:04.589058   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:04.589106   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:04.622780   67282 cri.go:89] found id: ""
	I1004 04:28:04.622808   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.622817   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:04.622823   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:28:04.622879   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:28:04.662620   67282 cri.go:89] found id: ""
	I1004 04:28:04.662641   67282 logs.go:282] 0 containers: []
	W1004 04:28:04.662649   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:28:04.662659   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:04.662669   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:04.717894   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:04.717928   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:04.732353   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:04.732385   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:28:04.806443   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:28:04.806469   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:04.806492   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:04.887684   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:28:04.887717   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:07.426630   67282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:07.440242   67282 kubeadm.go:597] duration metric: took 4m3.475062199s to restartPrimaryControlPlane
	W1004 04:28:07.440318   67282 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1004 04:28:07.440346   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:28:08.147532   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:08.162175   67282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:28:08.172013   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:28:08.181741   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:28:08.181757   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:28:08.181801   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:28:08.191002   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:28:08.191046   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:28:08.200929   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:28:08.210241   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:28:08.210286   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:28:08.219693   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:28:08.229497   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:28:08.229534   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:28:08.239583   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:28:08.249207   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:28:08.249252   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:28:08.258516   67282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:28:08.328054   67282 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:28:08.328132   67282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:28:08.472265   67282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:28:08.472420   67282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:28:08.472543   67282 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 04:28:08.655873   67282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:28:08.657726   67282 out.go:235]   - Generating certificates and keys ...
	I1004 04:28:08.657817   67282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:28:08.657876   67282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:28:08.657942   67282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:28:08.658034   67282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:28:08.658149   67282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:28:08.658235   67282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:28:08.658309   67282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:28:08.658396   67282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:28:08.658503   67282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:28:08.658600   67282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:28:08.658651   67282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:28:08.658707   67282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:28:08.706486   67282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:28:08.909036   67282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:28:09.285968   67282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:28:09.499963   67282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:28:09.516914   67282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:28:09.517832   67282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:28:09.517900   67282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:28:09.664925   67282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:28:10.147391   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:12.646012   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:09.121686   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:11.123086   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:13.123578   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:09.666691   67282 out.go:235]   - Booting up control plane ...
	I1004 04:28:09.666889   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:28:09.671298   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:28:09.672046   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:28:09.672956   67282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:28:09.685069   67282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 04:28:14.646614   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:16.646683   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:15.125374   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:17.125685   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:18.646777   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:21.147299   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:19.623872   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:22.123077   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:23.646460   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:25.647096   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:28.147324   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:24.623730   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:27.123516   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:30.379460   66755 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.336110507s)
	I1004 04:28:30.379544   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:30.395622   66755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 04:28:30.406790   66755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:28:30.417380   66755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:28:30.417408   66755 kubeadm.go:157] found existing configuration files:
	
	I1004 04:28:30.417458   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:28:30.427925   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:28:30.427993   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:28:30.438694   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:28:30.448898   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:28:30.448972   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:28:30.459463   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:28:30.469227   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:28:30.469281   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:28:30.479979   66755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:28:30.489873   66755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:28:30.489936   66755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:28:30.499999   66755 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:28:30.549707   66755 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 04:28:30.549771   66755 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:28:30.663468   66755 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:28:30.663595   66755 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:28:30.663698   66755 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 04:28:30.675750   66755 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:28:30.677655   66755 out.go:235]   - Generating certificates and keys ...
	I1004 04:28:30.677760   66755 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:28:30.677868   66755 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:28:30.678010   66755 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:28:30.678102   66755 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:28:30.678217   66755 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:28:30.678289   66755 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:28:30.678378   66755 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:28:30.678470   66755 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:28:30.678566   66755 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:28:30.678732   66755 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:28:30.679295   66755 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:28:30.679383   66755 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:28:30.826979   66755 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:28:30.900919   66755 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 04:28:31.098221   66755 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:28:31.243668   66755 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:28:31.411766   66755 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:28:31.412181   66755 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:28:31.414652   66755 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:28:30.646927   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:32.647767   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:29.129148   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:31.623284   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:31.416504   66755 out.go:235]   - Booting up control plane ...
	I1004 04:28:31.416620   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:28:31.416730   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:28:31.418284   66755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:28:31.437379   66755 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:28:31.443450   66755 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:28:31.443505   66755 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:28:31.586540   66755 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 04:28:31.586706   66755 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 04:28:32.088382   66755 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.195244ms
	I1004 04:28:32.088510   66755 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 04:28:37.090291   66755 kubeadm.go:310] [api-check] The API server is healthy after 5.001756025s
	I1004 04:28:37.103845   66755 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 04:28:37.127230   66755 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 04:28:37.156917   66755 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 04:28:37.157181   66755 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-934812 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 04:28:37.171399   66755 kubeadm.go:310] [bootstrap-token] Using token: 1wt5ey.lvccf2aeyngf9mt3
	I1004 04:28:34.648249   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:37.148680   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:33.623901   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:36.122762   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:38.123147   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:37.172939   66755 out.go:235]   - Configuring RBAC rules ...
	I1004 04:28:37.173086   66755 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 04:28:37.179454   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 04:28:37.188765   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 04:28:37.192599   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 04:28:37.200359   66755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 04:28:37.204872   66755 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 04:28:37.498753   66755 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 04:28:37.931621   66755 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 04:28:38.497855   66755 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 04:28:38.498949   66755 kubeadm.go:310] 
	I1004 04:28:38.499023   66755 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 04:28:38.499055   66755 kubeadm.go:310] 
	I1004 04:28:38.499183   66755 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 04:28:38.499195   66755 kubeadm.go:310] 
	I1004 04:28:38.499229   66755 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 04:28:38.499316   66755 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 04:28:38.499385   66755 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 04:28:38.499393   66755 kubeadm.go:310] 
	I1004 04:28:38.499481   66755 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 04:28:38.499498   66755 kubeadm.go:310] 
	I1004 04:28:38.499563   66755 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 04:28:38.499571   66755 kubeadm.go:310] 
	I1004 04:28:38.499653   66755 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 04:28:38.499742   66755 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 04:28:38.499871   66755 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 04:28:38.499888   66755 kubeadm.go:310] 
	I1004 04:28:38.499994   66755 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 04:28:38.500104   66755 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 04:28:38.500115   66755 kubeadm.go:310] 
	I1004 04:28:38.500220   66755 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1wt5ey.lvccf2aeyngf9mt3 \
	I1004 04:28:38.500350   66755 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 \
	I1004 04:28:38.500387   66755 kubeadm.go:310] 	--control-plane 
	I1004 04:28:38.500402   66755 kubeadm.go:310] 
	I1004 04:28:38.500478   66755 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 04:28:38.500484   66755 kubeadm.go:310] 
	I1004 04:28:38.500563   66755 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1wt5ey.lvccf2aeyngf9mt3 \
	I1004 04:28:38.500686   66755 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de55c919a3ef6c303c4c51ab2397ed74febe47705cfe7f7e47be594e33527e73 
	I1004 04:28:38.501820   66755 kubeadm.go:310] W1004 04:28:30.522396    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 04:28:38.502147   66755 kubeadm.go:310] W1004 04:28:30.524006    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 04:28:38.502282   66755 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:28:38.502311   66755 cni.go:84] Creating CNI manager for ""
	I1004 04:28:38.502321   66755 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 04:28:38.504185   66755 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 04:28:38.505600   66755 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 04:28:38.518746   66755 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1004 04:28:38.541311   66755 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 04:28:38.541422   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:38.541460   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-934812 minikube.k8s.io/updated_at=2024_10_04T04_28_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=embed-certs-934812 minikube.k8s.io/primary=true
	I1004 04:28:38.605537   66755 ops.go:34] apiserver oom_adj: -16
	I1004 04:28:38.765084   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:39.646916   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:41.651456   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:39.265365   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:39.765925   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:40.265135   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:40.766204   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:41.265734   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:41.765404   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:42.265993   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:42.765826   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:43.265776   66755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 04:28:43.353243   66755 kubeadm.go:1113] duration metric: took 4.811892444s to wait for elevateKubeSystemPrivileges
	I1004 04:28:43.353288   66755 kubeadm.go:394] duration metric: took 5m2.586827656s to StartCluster
	I1004 04:28:43.353313   66755 settings.go:142] acquiring lock: {Name:mk688b98cf0b8c6f4800e7cf045416678effbe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:28:43.353402   66755 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 04:28:43.355058   66755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-9647/kubeconfig: {Name:mk3288430256d71bbed23500c908d51783591a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 04:28:43.355309   66755 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.74 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 04:28:43.355388   66755 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 04:28:43.355533   66755 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-934812"
	I1004 04:28:43.355542   66755 addons.go:69] Setting default-storageclass=true in profile "embed-certs-934812"
	I1004 04:28:43.355556   66755 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-934812"
	I1004 04:28:43.355563   66755 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-934812"
	W1004 04:28:43.355568   66755 addons.go:243] addon storage-provisioner should already be in state true
	I1004 04:28:43.355584   66755 config.go:182] Loaded profile config "embed-certs-934812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 04:28:43.355598   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.355639   66755 addons.go:69] Setting metrics-server=true in profile "embed-certs-934812"
	I1004 04:28:43.355658   66755 addons.go:234] Setting addon metrics-server=true in "embed-certs-934812"
	W1004 04:28:43.355666   66755 addons.go:243] addon metrics-server should already be in state true
	I1004 04:28:43.355694   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.356024   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356075   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356095   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.356108   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.356075   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.356173   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.357087   66755 out.go:177] * Verifying Kubernetes components...
	I1004 04:28:43.358428   66755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 04:28:43.373646   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45715
	I1004 04:28:43.373874   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I1004 04:28:43.374344   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.374344   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.374927   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.374948   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.375003   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.375027   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.375285   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.375342   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.375499   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.375884   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.375928   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.376269   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43725
	I1004 04:28:43.376636   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.377073   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.377099   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.377455   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.377883   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.377918   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.378402   66755 addons.go:234] Setting addon default-storageclass=true in "embed-certs-934812"
	W1004 04:28:43.378420   66755 addons.go:243] addon default-storageclass should already be in state true
	I1004 04:28:43.378447   66755 host.go:66] Checking if "embed-certs-934812" exists ...
	I1004 04:28:43.378705   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.378734   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.394001   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
	I1004 04:28:43.394289   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I1004 04:28:43.394645   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.394760   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.395195   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.395213   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.395302   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.395317   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.395596   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.395626   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.395842   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.396120   66755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 04:28:43.396160   66755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 04:28:43.397590   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.399391   66755 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 04:28:43.400581   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 04:28:43.400598   66755 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 04:28:43.400619   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.405134   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.405778   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I1004 04:28:43.405968   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.405996   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.406230   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.406383   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.406428   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.406571   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.406698   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.406825   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.406847   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.407455   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.407600   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.409278   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.411006   66755 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 04:28:40.622426   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:42.623400   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:43.412106   66755 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:28:43.412124   66755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 04:28:43.412389   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.414167   66755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
	I1004 04:28:43.414796   66755 main.go:141] libmachine: () Calling .GetVersion
	I1004 04:28:43.415285   66755 main.go:141] libmachine: Using API Version  1
	I1004 04:28:43.415309   66755 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 04:28:43.415657   66755 main.go:141] libmachine: () Calling .GetMachineName
	I1004 04:28:43.415710   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.415911   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetState
	I1004 04:28:43.416195   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.416217   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.416440   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.416628   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.416759   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.416856   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.418235   66755 main.go:141] libmachine: (embed-certs-934812) Calling .DriverName
	I1004 04:28:43.418426   66755 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 04:28:43.418436   66755 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 04:28:43.418456   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHHostname
	I1004 04:28:43.421305   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.421761   66755 main.go:141] libmachine: (embed-certs-934812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:50", ip: ""} in network mk-embed-certs-934812: {Iface:virbr1 ExpiryTime:2024-10-04 05:23:27 +0000 UTC Type:0 Mac:52:54:00:25:fb:50 Iaid: IPaddr:192.168.61.74 Prefix:24 Hostname:embed-certs-934812 Clientid:01:52:54:00:25:fb:50}
	I1004 04:28:43.421779   66755 main.go:141] libmachine: (embed-certs-934812) DBG | domain embed-certs-934812 has defined IP address 192.168.61.74 and MAC address 52:54:00:25:fb:50 in network mk-embed-certs-934812
	I1004 04:28:43.421966   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHPort
	I1004 04:28:43.422654   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHKeyPath
	I1004 04:28:43.422789   66755 main.go:141] libmachine: (embed-certs-934812) Calling .GetSSHUsername
	I1004 04:28:43.422877   66755 sshutil.go:53] new ssh client: &{IP:192.168.61.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/embed-certs-934812/id_rsa Username:docker}
	I1004 04:28:43.580648   66755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 04:28:43.615728   66755 node_ready.go:35] waiting up to 6m0s for node "embed-certs-934812" to be "Ready" ...
	I1004 04:28:43.625558   66755 node_ready.go:49] node "embed-certs-934812" has status "Ready":"True"
	I1004 04:28:43.625600   66755 node_ready.go:38] duration metric: took 9.827384ms for node "embed-certs-934812" to be "Ready" ...
	I1004 04:28:43.625612   66755 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:43.634425   66755 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:43.748926   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 04:28:43.774727   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 04:28:43.781558   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 04:28:43.781589   66755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 04:28:43.838039   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 04:28:43.838067   66755 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 04:28:43.945364   66755 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:28:43.945392   66755 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 04:28:44.005000   66755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 04:28:44.253491   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.253521   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.253828   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.253896   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.253910   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.253925   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.253938   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.254130   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.254149   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.254164   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.267367   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.267396   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.267680   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.267700   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.864663   66755 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.089890385s)
	I1004 04:28:44.864722   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.864734   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.865046   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.865070   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:44.865086   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:44.865095   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:44.866872   66755 main.go:141] libmachine: (embed-certs-934812) DBG | Closing plugin on server side
	I1004 04:28:44.866877   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:44.866907   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.138868   66755 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133828074s)
	I1004 04:28:45.138926   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:45.138942   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:45.139243   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:45.139265   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.139276   66755 main.go:141] libmachine: Making call to close driver server
	I1004 04:28:45.139283   66755 main.go:141] libmachine: (embed-certs-934812) Calling .Close
	I1004 04:28:45.139484   66755 main.go:141] libmachine: Successfully made call to close driver server
	I1004 04:28:45.139497   66755 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 04:28:45.139507   66755 addons.go:475] Verifying addon metrics-server=true in "embed-certs-934812"
	I1004 04:28:45.141046   66755 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1004 04:28:44.147013   67541 pod_ready.go:103] pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:44.648117   67541 pod_ready.go:82] duration metric: took 4m0.007930603s for pod "metrics-server-6867b74b74-f6qhr" in "kube-system" namespace to be "Ready" ...
	E1004 04:28:44.648144   67541 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 04:28:44.648154   67541 pod_ready.go:39] duration metric: took 4m7.419382357s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:44.648170   67541 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:28:44.648200   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:44.648256   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:44.712473   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:44.712500   67541 cri.go:89] found id: ""
	I1004 04:28:44.712510   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:44.712568   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.717619   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:44.717688   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:44.760036   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:44.760061   67541 cri.go:89] found id: ""
	I1004 04:28:44.760071   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:44.760124   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.766402   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:44.766465   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:44.821766   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:44.821792   67541 cri.go:89] found id: ""
	I1004 04:28:44.821801   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:44.821858   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.826315   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:44.826370   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:44.873526   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:44.873547   67541 cri.go:89] found id: ""
	I1004 04:28:44.873556   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:44.873615   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.878375   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:44.878442   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:44.920240   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:44.920261   67541 cri.go:89] found id: ""
	I1004 04:28:44.920270   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:44.920322   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.925102   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:44.925158   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:44.967386   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:44.967406   67541 cri.go:89] found id: ""
	I1004 04:28:44.967416   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:44.967471   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:44.971979   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:44.972056   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:45.009842   67541 cri.go:89] found id: ""
	I1004 04:28:45.009869   67541 logs.go:282] 0 containers: []
	W1004 04:28:45.009881   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:45.009890   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:45.009952   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:45.055166   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:45.055189   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:45.055194   67541 cri.go:89] found id: ""
	I1004 04:28:45.055201   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:45.055258   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:45.060362   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:45.066118   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:45.066351   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:45.128185   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:45.128221   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:45.270042   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:45.270084   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:45.309065   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:45.309093   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:45.352299   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:45.352327   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:45.401846   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:45.401882   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:45.447474   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:45.447530   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:45.500734   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:45.500765   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:46.040224   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:46.040275   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:46.112675   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:46.112716   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:46.128530   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:46.128553   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:46.175007   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:46.175039   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:46.222706   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:46.222738   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:44.623804   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:47.122548   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:45.142166   66755 addons.go:510] duration metric: took 1.786788452s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1004 04:28:45.642731   66755 pod_ready.go:103] pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:46.641705   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.641730   66755 pod_ready.go:82] duration metric: took 3.007270041s for pod "coredns-7c65d6cfc9-h5tbr" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.641743   66755 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.646744   66755 pod_ready.go:93] pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.646767   66755 pod_ready.go:82] duration metric: took 5.01485ms for pod "coredns-7c65d6cfc9-p52s6" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.646777   66755 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.652554   66755 pod_ready.go:93] pod "etcd-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:46.652572   66755 pod_ready.go:82] duration metric: took 5.78883ms for pod "etcd-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:46.652580   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:48.659404   66755 pod_ready.go:103] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:51.158765   66755 pod_ready.go:93] pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.158787   66755 pod_ready.go:82] duration metric: took 4.506200726s for pod "kube-apiserver-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.158796   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.162949   66755 pod_ready.go:93] pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.162967   66755 pod_ready.go:82] duration metric: took 4.16468ms for pod "kube-controller-manager-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.162975   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9czbc" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.167309   66755 pod_ready.go:93] pod "kube-proxy-9czbc" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.167327   66755 pod_ready.go:82] duration metric: took 4.347415ms for pod "kube-proxy-9czbc" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.167334   66755 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.171048   66755 pod_ready.go:93] pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace has status "Ready":"True"
	I1004 04:28:51.171065   66755 pod_ready.go:82] duration metric: took 3.724785ms for pod "kube-scheduler-embed-certs-934812" in "kube-system" namespace to be "Ready" ...
	I1004 04:28:51.171071   66755 pod_ready.go:39] duration metric: took 7.545445402s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:28:51.171083   66755 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:28:51.171126   66755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:51.186751   66755 api_server.go:72] duration metric: took 7.831380288s to wait for apiserver process to appear ...
	I1004 04:28:51.186782   66755 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:28:51.186799   66755 api_server.go:253] Checking apiserver healthz at https://192.168.61.74:8443/healthz ...
	I1004 04:28:51.192753   66755 api_server.go:279] https://192.168.61.74:8443/healthz returned 200:
	ok
	I1004 04:28:51.194259   66755 api_server.go:141] control plane version: v1.31.1
	I1004 04:28:51.194284   66755 api_server.go:131] duration metric: took 7.491456ms to wait for apiserver health ...
	I1004 04:28:51.194292   66755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:28:51.241469   66755 system_pods.go:59] 9 kube-system pods found
	I1004 04:28:51.241491   66755 system_pods.go:61] "coredns-7c65d6cfc9-h5tbr" [87deb61f-2ce4-4d45-91da-c16557b5ef75] Running
	I1004 04:28:51.241496   66755 system_pods.go:61] "coredns-7c65d6cfc9-p52s6" [b9b3cd7f-f28e-4502-a55d-7792cfa5a6fe] Running
	I1004 04:28:51.241500   66755 system_pods.go:61] "etcd-embed-certs-934812" [487917e4-1b38-4b84-baf9-eacc1113718e] Running
	I1004 04:28:51.241503   66755 system_pods.go:61] "kube-apiserver-embed-certs-934812" [7fcfc483-3c53-415d-8329-5cae1ecd022f] Running
	I1004 04:28:51.241507   66755 system_pods.go:61] "kube-controller-manager-embed-certs-934812" [9ab25b16-916b-49f7-b34d-a12cf8552769] Running
	I1004 04:28:51.241514   66755 system_pods.go:61] "kube-proxy-9czbc" [dedff5a2-62b6-49c3-8369-9182d1c5bf7a] Running
	I1004 04:28:51.241517   66755 system_pods.go:61] "kube-scheduler-embed-certs-934812" [88a5acd5-108f-482a-ac24-7b75a30428ff] Running
	I1004 04:28:51.241525   66755 system_pods.go:61] "metrics-server-6867b74b74-fh2lk" [12e3e884-2ad3-4eaa-a505-822717e5bc8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:51.241528   66755 system_pods.go:61] "storage-provisioner" [67b4ef22-068c-4d14-840e-deab91c5ab94] Running
	I1004 04:28:51.241534   66755 system_pods.go:74] duration metric: took 47.237476ms to wait for pod list to return data ...
	I1004 04:28:51.241541   66755 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:28:51.438932   66755 default_sa.go:45] found service account: "default"
	I1004 04:28:51.438957   66755 default_sa.go:55] duration metric: took 197.410206ms for default service account to be created ...
	I1004 04:28:51.438966   66755 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:28:51.642064   66755 system_pods.go:86] 9 kube-system pods found
	I1004 04:28:51.642091   66755 system_pods.go:89] "coredns-7c65d6cfc9-h5tbr" [87deb61f-2ce4-4d45-91da-c16557b5ef75] Running
	I1004 04:28:51.642095   66755 system_pods.go:89] "coredns-7c65d6cfc9-p52s6" [b9b3cd7f-f28e-4502-a55d-7792cfa5a6fe] Running
	I1004 04:28:51.642100   66755 system_pods.go:89] "etcd-embed-certs-934812" [487917e4-1b38-4b84-baf9-eacc1113718e] Running
	I1004 04:28:51.642103   66755 system_pods.go:89] "kube-apiserver-embed-certs-934812" [7fcfc483-3c53-415d-8329-5cae1ecd022f] Running
	I1004 04:28:51.642107   66755 system_pods.go:89] "kube-controller-manager-embed-certs-934812" [9ab25b16-916b-49f7-b34d-a12cf8552769] Running
	I1004 04:28:51.642111   66755 system_pods.go:89] "kube-proxy-9czbc" [dedff5a2-62b6-49c3-8369-9182d1c5bf7a] Running
	I1004 04:28:51.642115   66755 system_pods.go:89] "kube-scheduler-embed-certs-934812" [88a5acd5-108f-482a-ac24-7b75a30428ff] Running
	I1004 04:28:51.642121   66755 system_pods.go:89] "metrics-server-6867b74b74-fh2lk" [12e3e884-2ad3-4eaa-a505-822717e5bc8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:51.642124   66755 system_pods.go:89] "storage-provisioner" [67b4ef22-068c-4d14-840e-deab91c5ab94] Running
	I1004 04:28:51.642133   66755 system_pods.go:126] duration metric: took 203.1616ms to wait for k8s-apps to be running ...
	I1004 04:28:51.642139   66755 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:28:51.642176   66755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:51.658916   66755 system_svc.go:56] duration metric: took 16.763146ms WaitForService to wait for kubelet
	I1004 04:28:51.658948   66755 kubeadm.go:582] duration metric: took 8.303579518s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:28:51.658964   66755 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:28:51.839048   66755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:28:51.839067   66755 node_conditions.go:123] node cpu capacity is 2
	I1004 04:28:51.839076   66755 node_conditions.go:105] duration metric: took 180.108785ms to run NodePressure ...
	I1004 04:28:51.839086   66755 start.go:241] waiting for startup goroutines ...
	I1004 04:28:51.839093   66755 start.go:246] waiting for cluster config update ...
	I1004 04:28:51.839103   66755 start.go:255] writing updated cluster config ...
	I1004 04:28:51.839343   66755 ssh_runner.go:195] Run: rm -f paused
	I1004 04:28:51.887283   66755 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:28:51.889326   66755 out.go:177] * Done! kubectl is now configured to use "embed-certs-934812" cluster and "default" namespace by default
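[Editor's note] The startup sequence that just completed includes a readiness probe against the control plane: api_server.go polls https://192.168.61.74:8443/healthz and treats a 200 response with body "ok" as healthy. As a standalone, hedged sketch of such a probe (not minikube's actual implementation; the URL, timeout, poll interval, and the insecure TLS setting are assumptions for this example, since the test cluster uses a self-signed CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz polls an apiserver /healthz endpoint until it returns
    // HTTP 200 or the overall timeout elapses. Illustrative only.
    func checkHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Certificate verification is skipped here because the example
            // assumes a self-signed cluster CA; a real client would trust
            // the cluster CA bundle instead.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        // Endpoint taken from the embed-certs-934812 log above.
        if err := checkHealthz("https://192.168.61.74:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }

[End editor's note]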
	I1004 04:28:48.765066   67541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:28:48.780955   67541 api_server.go:72] duration metric: took 4m18.802753607s to wait for apiserver process to appear ...
	I1004 04:28:48.780988   67541 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:28:48.781022   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:48.781074   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:48.817315   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:48.817337   67541 cri.go:89] found id: ""
	I1004 04:28:48.817346   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:48.817406   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.821619   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:48.821676   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:48.860019   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:48.860043   67541 cri.go:89] found id: ""
	I1004 04:28:48.860052   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:48.860101   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.864005   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:48.864065   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:48.901273   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:48.901295   67541 cri.go:89] found id: ""
	I1004 04:28:48.901303   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:48.901353   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.905950   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:48.906007   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:48.939708   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:48.939735   67541 cri.go:89] found id: ""
	I1004 04:28:48.939745   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:48.939812   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.943625   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:48.943692   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:48.979452   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:48.979481   67541 cri.go:89] found id: ""
	I1004 04:28:48.979490   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:48.979550   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:48.983629   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:48.983692   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:49.021137   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:49.021160   67541 cri.go:89] found id: ""
	I1004 04:28:49.021169   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:49.021242   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.025644   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:49.025712   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:49.062410   67541 cri.go:89] found id: ""
	I1004 04:28:49.062437   67541 logs.go:282] 0 containers: []
	W1004 04:28:49.062447   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:49.062452   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:49.062499   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:49.098959   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:49.098990   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:49.098996   67541 cri.go:89] found id: ""
	I1004 04:28:49.099005   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:49.099067   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.103474   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:49.107824   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:49.107852   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:49.228249   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:49.228278   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:49.269454   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:49.269479   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:49.305639   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:49.305666   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:49.770318   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:49.770348   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:49.808468   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:49.808493   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:49.884965   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:49.884997   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:49.901874   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:49.901898   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:49.952844   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:49.952869   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:49.986100   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:49.986141   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:50.023082   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:50.023108   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:50.074848   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:50.074876   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:50.112513   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:50.112541   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:52.658644   67541 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8444/healthz ...
	I1004 04:28:52.663076   67541 api_server.go:279] https://192.168.39.201:8444/healthz returned 200:
	ok
	I1004 04:28:52.663997   67541 api_server.go:141] control plane version: v1.31.1
	I1004 04:28:52.664017   67541 api_server.go:131] duration metric: took 3.8830221s to wait for apiserver health ...
	I1004 04:28:52.664024   67541 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:28:52.664045   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:28:52.664085   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:28:52.704174   67541 cri.go:89] found id: "8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:52.704193   67541 cri.go:89] found id: ""
	I1004 04:28:52.704200   67541 logs.go:282] 1 containers: [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500]
	I1004 04:28:52.704253   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.708388   67541 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:28:52.708438   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:28:52.743028   67541 cri.go:89] found id: "fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:52.743053   67541 cri.go:89] found id: ""
	I1004 04:28:52.743062   67541 logs.go:282] 1 containers: [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0]
	I1004 04:28:52.743108   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.747354   67541 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:28:52.747405   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:28:52.782350   67541 cri.go:89] found id: "7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:52.782373   67541 cri.go:89] found id: ""
	I1004 04:28:52.782382   67541 logs.go:282] 1 containers: [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0]
	I1004 04:28:52.782424   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.786336   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:28:52.786394   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:28:52.826929   67541 cri.go:89] found id: "59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:52.826950   67541 cri.go:89] found id: ""
	I1004 04:28:52.826958   67541 logs.go:282] 1 containers: [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3]
	I1004 04:28:52.827018   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.831039   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:28:52.831094   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:28:52.865963   67541 cri.go:89] found id: "387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:52.865984   67541 cri.go:89] found id: ""
	I1004 04:28:52.865992   67541 logs.go:282] 1 containers: [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754]
	I1004 04:28:52.866032   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.869982   67541 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:28:52.870024   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:28:52.919060   67541 cri.go:89] found id: "d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:52.919081   67541 cri.go:89] found id: ""
	I1004 04:28:52.919091   67541 logs.go:282] 1 containers: [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40]
	I1004 04:28:52.919139   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:52.923080   67541 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:28:52.923131   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:28:52.962615   67541 cri.go:89] found id: ""
	I1004 04:28:52.962636   67541 logs.go:282] 0 containers: []
	W1004 04:28:52.962643   67541 logs.go:284] No container was found matching "kindnet"
	I1004 04:28:52.962649   67541 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:28:52.962706   67541 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:28:52.999914   67541 cri.go:89] found id: "ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:52.999936   67541 cri.go:89] found id: "d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:52.999940   67541 cri.go:89] found id: ""
	I1004 04:28:52.999947   67541 logs.go:282] 2 containers: [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641]
	I1004 04:28:52.999998   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:53.003894   67541 ssh_runner.go:195] Run: which crictl
	I1004 04:28:53.007759   67541 logs.go:123] Gathering logs for dmesg ...
	I1004 04:28:53.007776   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:28:53.021269   67541 logs.go:123] Gathering logs for kube-apiserver [8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500] ...
	I1004 04:28:53.021289   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e5ab1b72e413aac6f58eb0c761afe40e3302066bb96962380dcaf9f9dbb8500"
	I1004 04:28:53.088683   67541 logs.go:123] Gathering logs for storage-provisioner [ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b] ...
	I1004 04:28:53.088711   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec898e33ba398ddd0c59c227f7d55bae7176d5729a9fa72f651a3172a4edf50b"
	I1004 04:28:53.127363   67541 logs.go:123] Gathering logs for storage-provisioner [d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641] ...
	I1004 04:28:53.127387   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2d04e275366abcc8b3130355bd348d2e8a18fe33ee7dfd0a0305baea4054641"
	I1004 04:28:53.163467   67541 logs.go:123] Gathering logs for container status ...
	I1004 04:28:53.163490   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:28:53.212683   67541 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:28:53.212717   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:28:49.123892   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:51.124121   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:53.124323   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:49.686881   67282 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:28:49.687234   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:28:49.687487   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:28:53.569320   67541 logs.go:123] Gathering logs for kubelet ...
	I1004 04:28:53.569360   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:28:53.644197   67541 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:28:53.644231   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:28:53.747465   67541 logs.go:123] Gathering logs for etcd [fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0] ...
	I1004 04:28:53.747497   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe3375782091ced79d34c6e67211330bfa9aaf58a59b4016b5a561abd54a84d0"
	I1004 04:28:53.788761   67541 logs.go:123] Gathering logs for coredns [7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0] ...
	I1004 04:28:53.788798   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6d3555bccddb74e771e1551d4b0faa55afeed69ee72c312dd580e9573790d0"
	I1004 04:28:53.822705   67541 logs.go:123] Gathering logs for kube-scheduler [59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3] ...
	I1004 04:28:53.822737   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59f9dd635170a1db37490a6850f7cc6f32de003edb8f9ab0109c110b8da5c6b3"
	I1004 04:28:53.857525   67541 logs.go:123] Gathering logs for kube-proxy [387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754] ...
	I1004 04:28:53.857548   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 387473e4357dc81d208bbf299ff9c677b464db2dc45e7f03eaaef86a39539754"
	I1004 04:28:53.894880   67541 logs.go:123] Gathering logs for kube-controller-manager [d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40] ...
	I1004 04:28:53.894904   67541 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d889ba1109ff24127b074169664e2fc28ea9ec85405331941f6677accc194d40"
	I1004 04:28:56.455254   67541 system_pods.go:59] 8 kube-system pods found
	I1004 04:28:56.455286   67541 system_pods.go:61] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running
	I1004 04:28:56.455293   67541 system_pods.go:61] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running
	I1004 04:28:56.455299   67541 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running
	I1004 04:28:56.455304   67541 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running
	I1004 04:28:56.455309   67541 system_pods.go:61] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:28:56.455314   67541 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running
	I1004 04:28:56.455322   67541 system_pods.go:61] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:56.455329   67541 system_pods.go:61] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:28:56.455338   67541 system_pods.go:74] duration metric: took 3.791308758s to wait for pod list to return data ...
	I1004 04:28:56.455347   67541 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:28:56.457799   67541 default_sa.go:45] found service account: "default"
	I1004 04:28:56.457817   67541 default_sa.go:55] duration metric: took 2.463452ms for default service account to be created ...
	I1004 04:28:56.457825   67541 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:28:56.462569   67541 system_pods.go:86] 8 kube-system pods found
	I1004 04:28:56.462593   67541 system_pods.go:89] "coredns-7c65d6cfc9-wz6rd" [6936a096-4173-4f58-aa65-001ea438e3a4] Running
	I1004 04:28:56.462601   67541 system_pods.go:89] "etcd-default-k8s-diff-port-281471" [2fb0d649-0b9f-4ed1-95e4-f7050b9c974d] Running
	I1004 04:28:56.462608   67541 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-281471" [869d3e22-9f5b-4fa9-9164-a4916b3f2b20] Running
	I1004 04:28:56.462615   67541 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-281471" [1df1684e-26f3-492c-b8e0-2953fc268e84] Running
	I1004 04:28:56.462620   67541 system_pods.go:89] "kube-proxy-4nnld" [3e045721-1f51-44cd-afc7-acf8e4ce6845] Running
	I1004 04:28:56.462626   67541 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-281471" [c865bc05-811d-4f1a-bf28-a5344dff06bb] Running
	I1004 04:28:56.462632   67541 system_pods.go:89] "metrics-server-6867b74b74-f6qhr" [46c2870a-41a6-46a1-bbbd-f38f2e266873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:28:56.462637   67541 system_pods.go:89] "storage-provisioner" [b644e87c-505e-44c0-b0a0-e07df97f5f51] Running
	I1004 04:28:56.462645   67541 system_pods.go:126] duration metric: took 4.814032ms to wait for k8s-apps to be running ...
	I1004 04:28:56.462657   67541 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:28:56.462749   67541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:28:56.478944   67541 system_svc.go:56] duration metric: took 16.282384ms WaitForService to wait for kubelet
	I1004 04:28:56.478966   67541 kubeadm.go:582] duration metric: took 4m26.500769346s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:28:56.478982   67541 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:28:56.481946   67541 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:28:56.481968   67541 node_conditions.go:123] node cpu capacity is 2
	I1004 04:28:56.481980   67541 node_conditions.go:105] duration metric: took 2.992423ms to run NodePressure ...
	I1004 04:28:56.481993   67541 start.go:241] waiting for startup goroutines ...
	I1004 04:28:56.482006   67541 start.go:246] waiting for cluster config update ...
	I1004 04:28:56.482018   67541 start.go:255] writing updated cluster config ...
	I1004 04:28:56.482450   67541 ssh_runner.go:195] Run: rm -f paused
	I1004 04:28:56.528299   67541 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:28:56.530289   67541 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-281471" cluster and "default" namespace by default
	I1004 04:28:55.625569   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:58.122544   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:28:54.687773   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:28:54.688026   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:29:00.124374   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:02.624622   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:05.123726   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:07.622036   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:04.688599   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:29:04.688808   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:29:09.623060   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:11.623590   66293 pod_ready.go:103] pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace has status "Ready":"False"
	I1004 04:29:12.123919   66293 pod_ready.go:82] duration metric: took 4m0.007496621s for pod "metrics-server-6867b74b74-zsf86" in "kube-system" namespace to be "Ready" ...
	E1004 04:29:12.123939   66293 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 04:29:12.123946   66293 pod_ready.go:39] duration metric: took 4m3.607239118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 04:29:12.123960   66293 api_server.go:52] waiting for apiserver process to appear ...
	I1004 04:29:12.123985   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:12.124023   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:12.174748   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:12.174767   66293 cri.go:89] found id: ""
	I1004 04:29:12.174775   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:12.174823   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.179374   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:12.179436   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:12.219617   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:12.219637   66293 cri.go:89] found id: ""
	I1004 04:29:12.219646   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:12.219699   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.223774   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:12.223844   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:12.261339   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:12.261360   66293 cri.go:89] found id: ""
	I1004 04:29:12.261369   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:12.261424   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.265364   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:12.265414   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:12.313178   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:12.313197   66293 cri.go:89] found id: ""
	I1004 04:29:12.313206   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:12.313271   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.317440   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:12.317498   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:12.353037   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:12.353054   66293 cri.go:89] found id: ""
	I1004 04:29:12.353072   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:12.353125   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.357212   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:12.357272   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:12.392082   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:12.392106   66293 cri.go:89] found id: ""
	I1004 04:29:12.392115   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:12.392167   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.396333   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:12.396395   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:12.439298   66293 cri.go:89] found id: ""
	I1004 04:29:12.439329   66293 logs.go:282] 0 containers: []
	W1004 04:29:12.439337   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:12.439343   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:12.439387   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:12.478798   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:12.478814   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:12.478818   66293 cri.go:89] found id: ""
	I1004 04:29:12.478824   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:12.478866   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.483035   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:12.486977   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:12.486992   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:12.520849   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:12.520875   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:13.072628   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:13.072671   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:13.137973   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:13.138000   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:13.259585   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:13.259611   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:13.312315   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:13.312340   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:13.352351   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:13.352377   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:13.391319   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:13.391352   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:13.430681   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:13.430712   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:13.464929   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:13.464957   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:13.505312   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:13.505340   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:13.520476   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:13.520517   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:13.582723   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:13.582752   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:16.131437   66293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 04:29:16.150426   66293 api_server.go:72] duration metric: took 4m14.921074088s to wait for apiserver process to appear ...
	I1004 04:29:16.150457   66293 api_server.go:88] waiting for apiserver healthz status ...
	I1004 04:29:16.150498   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:16.150559   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:16.197236   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:16.197265   66293 cri.go:89] found id: ""
	I1004 04:29:16.197275   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:16.197341   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.202103   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:16.202187   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:16.236881   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:16.236907   66293 cri.go:89] found id: ""
	I1004 04:29:16.236916   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:16.236976   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.241220   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:16.241289   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:16.275727   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:16.275750   66293 cri.go:89] found id: ""
	I1004 04:29:16.275759   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:16.275828   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.280282   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:16.280352   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:16.320297   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:16.320323   66293 cri.go:89] found id: ""
	I1004 04:29:16.320332   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:16.320386   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.324982   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:16.325038   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:16.367062   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:16.367081   66293 cri.go:89] found id: ""
	I1004 04:29:16.367089   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:16.367143   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.371124   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:16.371182   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:16.405706   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:16.405728   66293 cri.go:89] found id: ""
	I1004 04:29:16.405738   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:16.405785   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.410027   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:16.410084   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:16.444937   66293 cri.go:89] found id: ""
	I1004 04:29:16.444961   66293 logs.go:282] 0 containers: []
	W1004 04:29:16.444971   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:16.444978   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:16.445032   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:16.480123   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:16.480153   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:16.480160   66293 cri.go:89] found id: ""
	I1004 04:29:16.480168   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:16.480228   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.484216   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:16.488156   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:16.488177   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:16.501573   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:16.501591   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:16.600789   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:16.600814   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:16.641604   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:16.641634   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:16.696735   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:16.696764   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:16.737153   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:16.737177   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:17.188490   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:17.188546   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:17.262072   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:17.262108   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:17.310881   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:17.310911   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:17.356105   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:17.356135   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:17.398916   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:17.398948   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:17.440122   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:17.440149   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:17.482529   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:17.482553   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:20.034163   66293 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1004 04:29:20.039165   66293 api_server.go:279] https://192.168.72.54:8443/healthz returned 200:
	ok
	I1004 04:29:20.040105   66293 api_server.go:141] control plane version: v1.31.1
	I1004 04:29:20.040124   66293 api_server.go:131] duration metric: took 3.889660333s to wait for apiserver health ...
	I1004 04:29:20.040131   66293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 04:29:20.040156   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:29:20.040203   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:29:20.078208   66293 cri.go:89] found id: "1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:20.078234   66293 cri.go:89] found id: ""
	I1004 04:29:20.078244   66293 logs.go:282] 1 containers: [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09]
	I1004 04:29:20.078306   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.082751   66293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:29:20.082808   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:29:20.128002   66293 cri.go:89] found id: "def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:20.128024   66293 cri.go:89] found id: ""
	I1004 04:29:20.128034   66293 logs.go:282] 1 containers: [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38]
	I1004 04:29:20.128084   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.132039   66293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:29:20.132097   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:29:20.171887   66293 cri.go:89] found id: "8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:20.171911   66293 cri.go:89] found id: ""
	I1004 04:29:20.171921   66293 logs.go:282] 1 containers: [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e]
	I1004 04:29:20.171978   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.176095   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:29:20.176150   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:29:20.215155   66293 cri.go:89] found id: "bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:20.215175   66293 cri.go:89] found id: ""
	I1004 04:29:20.215183   66293 logs.go:282] 1 containers: [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e]
	I1004 04:29:20.215241   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.219738   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:29:20.219814   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:29:20.256116   66293 cri.go:89] found id: "d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:20.256134   66293 cri.go:89] found id: ""
	I1004 04:29:20.256142   66293 logs.go:282] 1 containers: [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab]
	I1004 04:29:20.256194   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.261201   66293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:29:20.261281   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:29:20.302328   66293 cri.go:89] found id: "1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:20.302350   66293 cri.go:89] found id: ""
	I1004 04:29:20.302359   66293 logs.go:282] 1 containers: [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6]
	I1004 04:29:20.302414   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.306488   66293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:29:20.306551   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:29:20.341266   66293 cri.go:89] found id: ""
	I1004 04:29:20.341290   66293 logs.go:282] 0 containers: []
	W1004 04:29:20.341300   66293 logs.go:284] No container was found matching "kindnet"
	I1004 04:29:20.341307   66293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 04:29:20.341361   66293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 04:29:20.379560   66293 cri.go:89] found id: "5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:20.379584   66293 cri.go:89] found id: "e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:20.379589   66293 cri.go:89] found id: ""
	I1004 04:29:20.379598   66293 logs.go:282] 2 containers: [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28]
	I1004 04:29:20.379653   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.383816   66293 ssh_runner.go:195] Run: which crictl
	I1004 04:29:20.388118   66293 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:29:20.388137   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 04:29:20.487661   66293 logs.go:123] Gathering logs for kube-apiserver [1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09] ...
	I1004 04:29:20.487686   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d381a201b984efc0487a81a89fc2a3c25151c31c238b26e32fd2e30b8c26f09"
	I1004 04:29:20.539728   66293 logs.go:123] Gathering logs for storage-provisioner [5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681] ...
	I1004 04:29:20.539754   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5451845c1793f24e453ea0f47190666a3b38f4bc098f566709c956550f797681"
	I1004 04:29:20.577435   66293 logs.go:123] Gathering logs for storage-provisioner [e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28] ...
	I1004 04:29:20.577463   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1cf4915ff1e55546edf93df7c9ae75ea5a8d7d0e665d62ac60df1a597ceec28"
	I1004 04:29:20.616450   66293 logs.go:123] Gathering logs for container status ...
	I1004 04:29:20.616480   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 04:29:20.658292   66293 logs.go:123] Gathering logs for kubelet ...
	I1004 04:29:20.658316   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:29:20.733483   66293 logs.go:123] Gathering logs for dmesg ...
	I1004 04:29:20.733515   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:29:20.749004   66293 logs.go:123] Gathering logs for etcd [def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38] ...
	I1004 04:29:20.749033   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def980019915cb54314aac5bb340d29c1040a3212debdb84782bd3019d29ac38"
	I1004 04:29:20.799355   66293 logs.go:123] Gathering logs for coredns [8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e] ...
	I1004 04:29:20.799383   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f0f82fef0d93e1c5524177f5a1cdc6bb84d4be3450b81e4c0cff598431a747e"
	I1004 04:29:20.839676   66293 logs.go:123] Gathering logs for kube-scheduler [bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e] ...
	I1004 04:29:20.839699   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0fa97b8409feb716427b0b3c30607683fb502a88609ce5c75ab9a55ebddf3e"
	I1004 04:29:20.874870   66293 logs.go:123] Gathering logs for kube-proxy [d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab] ...
	I1004 04:29:20.874896   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a50dddda4abf01941d1f7473beab9d94e11ff3e934e776e6204cc84c8a1aab"
	I1004 04:29:20.912635   66293 logs.go:123] Gathering logs for kube-controller-manager [1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6] ...
	I1004 04:29:20.912658   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f1e00105cb782c60c579257fd987e7586f22b0adb235b96a43806d3e5b499f6"
	I1004 04:29:20.968377   66293 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:29:20.968405   66293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:29:23.820462   66293 system_pods.go:59] 8 kube-system pods found
	I1004 04:29:23.820491   66293 system_pods.go:61] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running
	I1004 04:29:23.820497   66293 system_pods.go:61] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running
	I1004 04:29:23.820501   66293 system_pods.go:61] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running
	I1004 04:29:23.820506   66293 system_pods.go:61] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running
	I1004 04:29:23.820514   66293 system_pods.go:61] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running
	I1004 04:29:23.820517   66293 system_pods.go:61] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running
	I1004 04:29:23.820524   66293 system_pods.go:61] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:29:23.820529   66293 system_pods.go:61] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running
	I1004 04:29:23.820537   66293 system_pods.go:74] duration metric: took 3.780400092s to wait for pod list to return data ...
	I1004 04:29:23.820544   66293 default_sa.go:34] waiting for default service account to be created ...
	I1004 04:29:23.823119   66293 default_sa.go:45] found service account: "default"
	I1004 04:29:23.823137   66293 default_sa.go:55] duration metric: took 2.58707ms for default service account to be created ...
	I1004 04:29:23.823144   66293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 04:29:23.827365   66293 system_pods.go:86] 8 kube-system pods found
	I1004 04:29:23.827385   66293 system_pods.go:89] "coredns-7c65d6cfc9-ppggj" [6a5d64c0-542f-4972-b038-e675495a22b7] Running
	I1004 04:29:23.827389   66293 system_pods.go:89] "etcd-no-preload-658545" [ec477e1b-a078-4b44-aad1-85079576ab60] Running
	I1004 04:29:23.827393   66293 system_pods.go:89] "kube-apiserver-no-preload-658545" [8d5e6fde-c87f-43e5-b9cd-3a94cd7ca822] Running
	I1004 04:29:23.827397   66293 system_pods.go:89] "kube-controller-manager-no-preload-658545" [fe758203-2931-495a-a516-c45a8a331b4b] Running
	I1004 04:29:23.827400   66293 system_pods.go:89] "kube-proxy-dvr6b" [365b5c79-3995-4de5-aeb2-da465aeb66dd] Running
	I1004 04:29:23.827405   66293 system_pods.go:89] "kube-scheduler-no-preload-658545" [14c88a57-7373-439a-ad37-53ade08e9720] Running
	I1004 04:29:23.827410   66293 system_pods.go:89] "metrics-server-6867b74b74-zsf86" [434282d8-7a99-4a76-b5c3-a880cf78ec35] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 04:29:23.827415   66293 system_pods.go:89] "storage-provisioner" [28bf1888-f061-44ad-9c2b-0f2db0ade47f] Running
	I1004 04:29:23.827422   66293 system_pods.go:126] duration metric: took 4.27475ms to wait for k8s-apps to be running ...
	I1004 04:29:23.827428   66293 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 04:29:23.827468   66293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:29:23.844696   66293 system_svc.go:56] duration metric: took 17.261418ms WaitForService to wait for kubelet
	I1004 04:29:23.844724   66293 kubeadm.go:582] duration metric: took 4m22.61537826s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 04:29:23.844746   66293 node_conditions.go:102] verifying NodePressure condition ...
	I1004 04:29:23.847873   66293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1004 04:29:23.847892   66293 node_conditions.go:123] node cpu capacity is 2
	I1004 04:29:23.847902   66293 node_conditions.go:105] duration metric: took 3.149916ms to run NodePressure ...
	I1004 04:29:23.847915   66293 start.go:241] waiting for startup goroutines ...
	I1004 04:29:23.847923   66293 start.go:246] waiting for cluster config update ...
	I1004 04:29:23.847932   66293 start.go:255] writing updated cluster config ...
	I1004 04:29:23.848202   66293 ssh_runner.go:195] Run: rm -f paused
	I1004 04:29:23.894092   66293 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 04:29:23.895736   66293 out.go:177] * Done! kubectl is now configured to use "no-preload-658545" cluster and "default" namespace by default
	I1004 04:29:24.690241   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:29:24.690419   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:04.692816   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:04.693091   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:04.693114   67282 kubeadm.go:310] 
	I1004 04:30:04.693149   67282 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:30:04.693214   67282 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:30:04.693236   67282 kubeadm.go:310] 
	I1004 04:30:04.693295   67282 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:30:04.693327   67282 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:30:04.693451   67282 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:30:04.693460   67282 kubeadm.go:310] 
	I1004 04:30:04.693568   67282 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:30:04.693614   67282 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:30:04.693668   67282 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:30:04.693688   67282 kubeadm.go:310] 
	I1004 04:30:04.693843   67282 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:30:04.693966   67282 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:30:04.693982   67282 kubeadm.go:310] 
	I1004 04:30:04.694097   67282 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:30:04.694218   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:30:04.694305   67282 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:30:04.694387   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:30:04.694399   67282 kubeadm.go:310] 
	I1004 04:30:04.695379   67282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:30:04.695478   67282 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:30:04.695566   67282 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1004 04:30:04.695695   67282 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1004 04:30:04.695742   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 04:30:05.153635   67282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 04:30:05.170057   67282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 04:30:05.179541   67282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 04:30:05.179563   67282 kubeadm.go:157] found existing configuration files:
	
	I1004 04:30:05.179611   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 04:30:05.188969   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 04:30:05.189025   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 04:30:05.198049   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 04:30:05.207031   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 04:30:05.207118   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 04:30:05.216934   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 04:30:05.226477   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 04:30:05.226541   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 04:30:05.236222   67282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 04:30:05.245314   67282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 04:30:05.245374   67282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 04:30:05.255762   67282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 04:30:05.329816   67282 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1004 04:30:05.329953   67282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 04:30:05.482342   67282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 04:30:05.482549   67282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 04:30:05.482692   67282 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 04:30:05.666400   67282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 04:30:05.668115   67282 out.go:235]   - Generating certificates and keys ...
	I1004 04:30:05.668217   67282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 04:30:05.668319   67282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 04:30:05.668460   67282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 04:30:05.668562   67282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1004 04:30:05.668660   67282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 04:30:05.668734   67282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1004 04:30:05.668823   67282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1004 04:30:05.668905   67282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1004 04:30:05.669010   67282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 04:30:05.669130   67282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 04:30:05.669186   67282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1004 04:30:05.669269   67282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 04:30:05.773446   67282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 04:30:05.823736   67282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 04:30:05.951294   67282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 04:30:06.250340   67282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 04:30:06.275797   67282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 04:30:06.276877   67282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 04:30:06.276944   67282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 04:30:06.437286   67282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 04:30:06.438849   67282 out.go:235]   - Booting up control plane ...
	I1004 04:30:06.438952   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 04:30:06.443688   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 04:30:06.444596   67282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 04:30:06.445267   67282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 04:30:06.457334   67282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 04:30:46.456706   67282 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1004 04:30:46.456854   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:46.457117   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:30:51.456986   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:30:51.457240   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:31:01.457062   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:31:01.457288   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:31:21.456976   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:31:21.457277   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:32:01.456978   67282 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1004 04:32:01.457225   67282 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1004 04:32:01.457249   67282 kubeadm.go:310] 
	I1004 04:32:01.457312   67282 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1004 04:32:01.457374   67282 kubeadm.go:310] 		timed out waiting for the condition
	I1004 04:32:01.457383   67282 kubeadm.go:310] 
	I1004 04:32:01.457434   67282 kubeadm.go:310] 	This error is likely caused by:
	I1004 04:32:01.457512   67282 kubeadm.go:310] 		- The kubelet is not running
	I1004 04:32:01.457678   67282 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1004 04:32:01.457692   67282 kubeadm.go:310] 
	I1004 04:32:01.457838   67282 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1004 04:32:01.457892   67282 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1004 04:32:01.457946   67282 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1004 04:32:01.457957   67282 kubeadm.go:310] 
	I1004 04:32:01.458102   67282 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1004 04:32:01.458217   67282 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1004 04:32:01.458233   67282 kubeadm.go:310] 
	I1004 04:32:01.458379   67282 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1004 04:32:01.458494   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1004 04:32:01.458604   67282 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1004 04:32:01.458699   67282 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1004 04:32:01.458710   67282 kubeadm.go:310] 
	I1004 04:32:01.459157   67282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 04:32:01.459272   67282 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1004 04:32:01.459386   67282 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1004 04:32:01.459464   67282 kubeadm.go:394] duration metric: took 7m57.553695137s to StartCluster
	I1004 04:32:01.459522   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 04:32:01.459586   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 04:32:01.500997   67282 cri.go:89] found id: ""
	I1004 04:32:01.501026   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.501037   67282 logs.go:284] No container was found matching "kube-apiserver"
	I1004 04:32:01.501044   67282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 04:32:01.501102   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 04:32:01.537240   67282 cri.go:89] found id: ""
	I1004 04:32:01.537276   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.537288   67282 logs.go:284] No container was found matching "etcd"
	I1004 04:32:01.537295   67282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 04:32:01.537349   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 04:32:01.573959   67282 cri.go:89] found id: ""
	I1004 04:32:01.573995   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.574007   67282 logs.go:284] No container was found matching "coredns"
	I1004 04:32:01.574013   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 04:32:01.574074   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 04:32:01.610614   67282 cri.go:89] found id: ""
	I1004 04:32:01.610645   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.610657   67282 logs.go:284] No container was found matching "kube-scheduler"
	I1004 04:32:01.610665   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 04:32:01.610716   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 04:32:01.645520   67282 cri.go:89] found id: ""
	I1004 04:32:01.645554   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.645567   67282 logs.go:284] No container was found matching "kube-proxy"
	I1004 04:32:01.645574   67282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 04:32:01.645640   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 04:32:01.679787   67282 cri.go:89] found id: ""
	I1004 04:32:01.679814   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.679823   67282 logs.go:284] No container was found matching "kube-controller-manager"
	I1004 04:32:01.679828   67282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 04:32:01.679873   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 04:32:01.714860   67282 cri.go:89] found id: ""
	I1004 04:32:01.714883   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.714891   67282 logs.go:284] No container was found matching "kindnet"
	I1004 04:32:01.714897   67282 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 04:32:01.714952   67282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 04:32:01.761170   67282 cri.go:89] found id: ""
	I1004 04:32:01.761198   67282 logs.go:282] 0 containers: []
	W1004 04:32:01.761208   67282 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1004 04:32:01.761220   67282 logs.go:123] Gathering logs for kubelet ...
	I1004 04:32:01.761232   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 04:32:01.822966   67282 logs.go:123] Gathering logs for dmesg ...
	I1004 04:32:01.823006   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 04:32:01.839482   67282 logs.go:123] Gathering logs for describe nodes ...
	I1004 04:32:01.839510   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1004 04:32:01.917863   67282 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1004 04:32:01.917887   67282 logs.go:123] Gathering logs for CRI-O ...
	I1004 04:32:01.917901   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 04:32:02.027216   67282 logs.go:123] Gathering logs for container status ...
	I1004 04:32:02.027247   67282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1004 04:32:02.069804   67282 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1004 04:32:02.069852   67282 out.go:270] * 
	W1004 04:32:02.069922   67282 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:32:02.069939   67282 out.go:270] * 
	W1004 04:32:02.070740   67282 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 04:32:02.074308   67282 out.go:201] 
	W1004 04:32:02.075387   67282 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1004 04:32:02.075427   67282 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1004 04:32:02.075458   67282 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1004 04:32:02.076675   67282 out.go:201] 
	
	
	==> CRI-O <==
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.759151671Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017045759124081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19f10bc8-1826-4ee0-9bc4-03dd0b3dff3d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.759606634Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c26dffa1-60f7-408c-9fdb-f918e3a57ad0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.759675384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c26dffa1-60f7-408c-9fdb-f918e3a57ad0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.759740079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c26dffa1-60f7-408c-9fdb-f918e3a57ad0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.793239310Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=defe3770-f057-42d7-9375-a63f387190d0 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.793330727Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=defe3770-f057-42d7-9375-a63f387190d0 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.794140158Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=428e0e90-1cef-4176-8927-7a935a7157fc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.794515547Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017045794497282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=428e0e90-1cef-4176-8927-7a935a7157fc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.795005788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dea359b1-2a55-4930-8bed-6692c80bbbd1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.795059402Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dea359b1-2a55-4930-8bed-6692c80bbbd1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.795090598Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dea359b1-2a55-4930-8bed-6692c80bbbd1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.826932658Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24f5845a-a0f9-45eb-abed-20303d853d72 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.827062247Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24f5845a-a0f9-45eb-abed-20303d853d72 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.828625586Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad75d06d-3d1a-4f1e-bf0f-9e772cb6fca9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.829076889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017045829049811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad75d06d-3d1a-4f1e-bf0f-9e772cb6fca9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.829587411Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55b50e41-16b2-40eb-acf5-a0d37641d389 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.829637578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55b50e41-16b2-40eb-acf5-a0d37641d389 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.829667585Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=55b50e41-16b2-40eb-acf5-a0d37641d389 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.864886096Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c3f0da7-a243-4963-8a19-0dc0458ce082 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.865025924Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c3f0da7-a243-4963-8a19-0dc0458ce082 name=/runtime.v1.RuntimeService/Version
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.865906217Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=444e74f7-5aa2-48de-b26b-5959192709c3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.866343808Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728017045866322637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=444e74f7-5aa2-48de-b26b-5959192709c3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.866762946Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfb2df3a-19be-41ad-b457-eed47d0d68fd name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.866805668Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfb2df3a-19be-41ad-b457-eed47d0d68fd name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 04:44:05 old-k8s-version-420062 crio[636]: time="2024-10-04 04:44:05.866838685Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dfb2df3a-19be-41ad-b457-eed47d0d68fd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 4 04:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057605] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040409] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.074027] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.556132] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.574130] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.887139] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.071312] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072511] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.216496] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.132348] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.289222] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[Oct 4 04:24] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.060637] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.786232] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[ +11.909104] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 4 04:28] systemd-fstab-generator[5073]: Ignoring "noauto" option for root device
	[Oct 4 04:30] systemd-fstab-generator[5352]: Ignoring "noauto" option for root device
	[  +0.068575] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 04:44:06 up 20 min,  0 users,  load average: 0.04, 0.05, 0.02
	Linux old-k8s-version-420062 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1(0xc0001e33e0, 0xc000c52460, 0xc000c6f950, 0xc00033d950, 0xc0005421a0, 0xc00033d9c0, 0xc00083d380)
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:302 +0x1a5
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:268 +0x295
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]: goroutine 140 [select]:
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]: net.(*Resolver).lookupIPAddr(0x70c5740, 0x4f7fe40, 0xc0001e35c0, 0x48ab5d6, 0x3, 0xc0002821e0, 0x1f, 0x20fb, 0x0, 0x0, ...)
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]:         /usr/local/go/src/net/lookup.go:299 +0x685
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc0001e35c0, 0x48ab5d6, 0x3, 0xc0002821e0, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0001e35c0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc0002821e0, 0x24, 0x0, ...)
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]: net.(*Dialer).DialContext(0xc000af01e0, 0x4f7fe00, 0xc000128018, 0x48ab5d6, 0x3, 0xc0002821e0, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b03480, 0x4f7fe00, 0xc000128018, 0x48ab5d6, 0x3, 0xc0002821e0, 0x24, 0x60, 0x7f2e2d0945b0, 0x118, ...)
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]: net/http.(*Transport).dial(0xc000870000, 0x4f7fe00, 0xc000128018, 0x48ab5d6, 0x3, 0xc0002821e0, 0x24, 0x0, 0x0, 0x4f0b860, ...)
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]: net/http.(*Transport).dialConn(0xc000870000, 0x4f7fe00, 0xc000128018, 0x0, 0xc000b9a300, 0x5, 0xc0002821e0, 0x24, 0x0, 0xc0002eab40, ...)
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]: net/http.(*Transport).dialConnFor(0xc000870000, 0xc0000a5ce0)
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]: created by net/http.(*Transport).queueForDial
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]: goroutine 141 [select]:
	Oct 04 04:44:06 old-k8s-version-420062 kubelet[6902]: net.cgoLookupIP(0x4f7fdc0, 0xc0003ed7c0, 0x48ab5d6, 0x3, 0xc0002821e0, 0x1f, 0x4170d40, 0xc000379310, 0xc0008a9ce0, 0x45756f, ...)
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-420062 -n old-k8s-version-420062
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-420062 -n old-k8s-version-420062: exit status 2 (230.755705ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-420062" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (178.61s)
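The kubeadm and minikube hints captured in the log above can be followed by hand. A minimal sketch, assuming the profile name old-k8s-version-420062 from this run and the kvm2/crio/v1.20.0 flags used elsewhere in this report; these commands are taken from the output above and are not something the test harness itself runs:

	# open a shell on the failing node
	minikube ssh -p old-k8s-version-420062
	# kubelet checks suggested in the kubeadm output
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# list any control-plane containers CRI-O managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of a failing container (CONTAINERID is a placeholder)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	exit
	# retry the start with the cgroup-driver hint from the suggestion above
	minikube start -p old-k8s-version-420062 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd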

                                                
                                    

Test pass (198/267)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 27.04
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 12.67
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 63.66
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 206.01
31 TestAddons/serial/GCPAuth/Namespaces 0.15
34 TestAddons/parallel/Registry 17.7
36 TestAddons/parallel/InspektorGadget 11.88
37 TestAddons/parallel/Logviewer 6.68
40 TestAddons/parallel/CSI 68.51
41 TestAddons/parallel/Headlamp 19.86
42 TestAddons/parallel/CloudSpanner 5.56
43 TestAddons/parallel/LocalPath 57.64
44 TestAddons/parallel/NvidiaDevicePlugin 6.61
45 TestAddons/parallel/Yakd 11.28
47 TestCertOptions 75.08
48 TestCertExpiration 321.16
50 TestForceSystemdFlag 118.58
51 TestForceSystemdEnv 99.65
53 TestKVMDriverInstallOrUpdate 4.54
57 TestErrorSpam/setup 41.14
58 TestErrorSpam/start 0.34
59 TestErrorSpam/status 0.75
60 TestErrorSpam/pause 1.61
61 TestErrorSpam/unpause 1.86
62 TestErrorSpam/stop 5.72
65 TestFunctional/serial/CopySyncFile 0
66 TestFunctional/serial/StartWithProxy 84.34
67 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/SoftStart 43.44
69 TestFunctional/serial/KubeContext 0.04
70 TestFunctional/serial/KubectlGetPods 0.07
73 TestFunctional/serial/CacheCmd/cache/add_remote 3.47
74 TestFunctional/serial/CacheCmd/cache/add_local 2.25
75 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
76 TestFunctional/serial/CacheCmd/cache/list 0.04
77 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
78 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
79 TestFunctional/serial/CacheCmd/cache/delete 0.09
80 TestFunctional/serial/MinikubeKubectlCmd 0.1
81 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
82 TestFunctional/serial/ExtraConfig 369.2
83 TestFunctional/serial/ComponentHealth 0.06
84 TestFunctional/serial/LogsCmd 1.37
85 TestFunctional/serial/LogsFileCmd 1.36
86 TestFunctional/serial/InvalidService 4.44
88 TestFunctional/parallel/ConfigCmd 0.29
89 TestFunctional/parallel/DashboardCmd 32.64
90 TestFunctional/parallel/DryRun 0.3
91 TestFunctional/parallel/InternationalLanguage 0.13
92 TestFunctional/parallel/StatusCmd 1.12
96 TestFunctional/parallel/ServiceCmdConnect 10.57
97 TestFunctional/parallel/AddonsCmd 0.12
98 TestFunctional/parallel/PersistentVolumeClaim 49.62
100 TestFunctional/parallel/SSHCmd 0.38
101 TestFunctional/parallel/CpCmd 1.25
102 TestFunctional/parallel/MySQL 29.38
103 TestFunctional/parallel/FileSync 0.26
104 TestFunctional/parallel/CertSync 1.32
108 TestFunctional/parallel/NodeLabels 0.06
110 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
112 TestFunctional/parallel/License 0.67
122 TestFunctional/parallel/ServiceCmd/DeployApp 11.23
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
124 TestFunctional/parallel/ProfileCmd/profile_list 0.34
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
126 TestFunctional/parallel/MountCmd/any-port 9.83
127 TestFunctional/parallel/MountCmd/specific-port 1.81
128 TestFunctional/parallel/ServiceCmd/List 0.36
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
131 TestFunctional/parallel/ServiceCmd/Format 0.58
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.75
133 TestFunctional/parallel/ServiceCmd/URL 0.53
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
137 TestFunctional/parallel/Version/short 0.05
138 TestFunctional/parallel/Version/components 0.58
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
143 TestFunctional/parallel/ImageCommands/ImageBuild 6.99
144 TestFunctional/parallel/ImageCommands/Setup 1.98
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.5
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.07
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.8
149 TestFunctional/parallel/ImageCommands/ImageRemove 1.87
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 7.94
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.01
158 TestMultiControlPlane/serial/StartCluster 197.87
159 TestMultiControlPlane/serial/DeployApp 7.61
160 TestMultiControlPlane/serial/PingHostFromPods 1.19
161 TestMultiControlPlane/serial/AddWorkerNode 59.16
162 TestMultiControlPlane/serial/NodeLabels 0.07
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
164 TestMultiControlPlane/serial/CopyFile 12.69
170 TestMultiControlPlane/serial/DeleteSecondaryNode 16.56
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.59
173 TestMultiControlPlane/serial/RestartCluster 346.31
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.61
175 TestMultiControlPlane/serial/AddSecondaryNode 77.45
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
180 TestJSONOutput/start/Command 82.41
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.67
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.61
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 7.36
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.18
208 TestMainNoArgs 0.04
209 TestMinikubeProfile 90.7
212 TestMountStart/serial/StartWithMountFirst 30.18
213 TestMountStart/serial/VerifyMountFirst 0.37
214 TestMountStart/serial/StartWithMountSecond 27.33
215 TestMountStart/serial/VerifyMountSecond 0.37
216 TestMountStart/serial/DeleteFirst 0.66
217 TestMountStart/serial/VerifyMountPostDelete 0.38
218 TestMountStart/serial/Stop 1.27
219 TestMountStart/serial/RestartStopped 20.79
220 TestMountStart/serial/VerifyMountPostStop 0.37
223 TestMultiNode/serial/FreshStart2Nodes 114.19
224 TestMultiNode/serial/DeployApp2Nodes 5.46
225 TestMultiNode/serial/PingHostFrom2Pods 0.79
226 TestMultiNode/serial/AddNode 50.41
227 TestMultiNode/serial/MultiNodeLabels 0.06
228 TestMultiNode/serial/ProfileList 0.55
229 TestMultiNode/serial/CopyFile 6.87
230 TestMultiNode/serial/StopNode 2.26
231 TestMultiNode/serial/StartAfterStop 39.24
233 TestMultiNode/serial/DeleteNode 2.01
235 TestMultiNode/serial/RestartMultiNode 182.48
236 TestMultiNode/serial/ValidateNameConflict 43.86
243 TestScheduledStopUnix 114.69
247 TestRunningBinaryUpgrade 143.98
270 TestPause/serial/Start 86.65
271 TestStoppedBinaryUpgrade/Setup 2.88
272 TestStoppedBinaryUpgrade/Upgrade 117.68
275 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
276 TestNoKubernetes/serial/StartWithK8s 47.47
277 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
278 TestNoKubernetes/serial/StartWithStopK8s 47.53
279 TestNoKubernetes/serial/Start 28.24
283 TestStartStop/group/no-preload/serial/FirstStart 103.66
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
285 TestNoKubernetes/serial/ProfileList 32.04
286 TestNoKubernetes/serial/Stop 1.42
289 TestStartStop/group/embed-certs/serial/FirstStart 80.15
290 TestStartStop/group/no-preload/serial/DeployApp 11.47
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
293 TestStartStop/group/newest-cni/serial/FirstStart 47.77
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
296 TestStartStop/group/embed-certs/serial/DeployApp 11.3
297 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
299 TestStartStop/group/newest-cni/serial/DeployApp 0
300 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
301 TestStartStop/group/newest-cni/serial/Stop 10.64
302 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
303 TestStartStop/group/newest-cni/serial/SecondStart 37.68
304 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
305 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
306 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
307 TestStartStop/group/newest-cni/serial/Pause 2.34
309 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 58.09
311 TestStartStop/group/no-preload/serial/SecondStart 645.78
312 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.28
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
318 TestStartStop/group/embed-certs/serial/SecondStart 583.06
319 TestStartStop/group/old-k8s-version/serial/Stop 3.31
320 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
323 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 453.55
354 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
355 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.99
x
+
TestDownloadOnly/v1.20.0/json-events (27.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-920812 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-920812 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (27.034916864s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (27.04s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1004 02:48:28.137162   16879 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1004 02:48:28.137319   16879 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-920812
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-920812: exit status 85 (55.250748ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-920812 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |          |
	|         | -p download-only-920812        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 02:48:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 02:48:01.139497   16891 out.go:345] Setting OutFile to fd 1 ...
	I1004 02:48:01.139752   16891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:48:01.139762   16891 out.go:358] Setting ErrFile to fd 2...
	I1004 02:48:01.139766   16891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:48:01.140344   16891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	W1004 02:48:01.140655   16891 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19546-9647/.minikube/config/config.json: open /home/jenkins/minikube-integration/19546-9647/.minikube/config/config.json: no such file or directory
	I1004 02:48:01.141460   16891 out.go:352] Setting JSON to true
	I1004 02:48:01.142379   16891 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1826,"bootTime":1728008255,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 02:48:01.142480   16891 start.go:139] virtualization: kvm guest
	I1004 02:48:01.145154   16891 out.go:97] [download-only-920812] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1004 02:48:01.145257   16891 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball: no such file or directory
	I1004 02:48:01.145306   16891 notify.go:220] Checking for updates...
	I1004 02:48:01.146713   16891 out.go:169] MINIKUBE_LOCATION=19546
	I1004 02:48:01.148069   16891 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 02:48:01.149325   16891 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 02:48:01.150579   16891 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 02:48:01.151883   16891 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1004 02:48:01.154156   16891 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1004 02:48:01.154377   16891 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 02:48:01.257807   16891 out.go:97] Using the kvm2 driver based on user configuration
	I1004 02:48:01.257838   16891 start.go:297] selected driver: kvm2
	I1004 02:48:01.257844   16891 start.go:901] validating driver "kvm2" against <nil>
	I1004 02:48:01.258188   16891 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:48:01.258351   16891 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 02:48:01.274007   16891 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 02:48:01.274074   16891 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 02:48:01.274804   16891 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1004 02:48:01.275032   16891 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1004 02:48:01.275065   16891 cni.go:84] Creating CNI manager for ""
	I1004 02:48:01.275121   16891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 02:48:01.275131   16891 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 02:48:01.275213   16891 start.go:340] cluster config:
	{Name:download-only-920812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-920812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 02:48:01.275491   16891 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:48:01.277749   16891 out.go:97] Downloading VM boot image ...
	I1004 02:48:01.277809   16891 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19546-9647/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1004 02:48:12.906014   16891 out.go:97] Starting "download-only-920812" primary control-plane node in "download-only-920812" cluster
	I1004 02:48:12.906046   16891 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1004 02:48:13.021718   16891 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1004 02:48:13.021749   16891 cache.go:56] Caching tarball of preloaded images
	I1004 02:48:13.021954   16891 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1004 02:48:13.023939   16891 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1004 02:48:13.023961   16891 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1004 02:48:13.152294   16891 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1004 02:48:25.930606   16891 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1004 02:48:25.930700   16891 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-920812 host does not exist
	  To start a cluster, run: "minikube start -p download-only-920812"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-920812
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (12.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-583140 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-583140 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.665566676s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (12.67s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1004 02:48:41.121541   16879 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1004 02:48:41.121602   16879 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-583140
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-583140: exit status 85 (57.208894ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-920812 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | -p download-only-920812        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| delete  | -p download-only-920812        | download-only-920812 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| start   | -o=json --download-only        | download-only-583140 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | -p download-only-583140        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 02:48:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 02:48:28.493855   17163 out.go:345] Setting OutFile to fd 1 ...
	I1004 02:48:28.493993   17163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:48:28.494002   17163 out.go:358] Setting ErrFile to fd 2...
	I1004 02:48:28.494008   17163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:48:28.494185   17163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 02:48:28.494747   17163 out.go:352] Setting JSON to true
	I1004 02:48:28.495595   17163 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1853,"bootTime":1728008255,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 02:48:28.495689   17163 start.go:139] virtualization: kvm guest
	I1004 02:48:28.497936   17163 out.go:97] [download-only-583140] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 02:48:28.498070   17163 notify.go:220] Checking for updates...
	I1004 02:48:28.499485   17163 out.go:169] MINIKUBE_LOCATION=19546
	I1004 02:48:28.501014   17163 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 02:48:28.502430   17163 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 02:48:28.503837   17163 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 02:48:28.505043   17163 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1004 02:48:28.507266   17163 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1004 02:48:28.507529   17163 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 02:48:28.539958   17163 out.go:97] Using the kvm2 driver based on user configuration
	I1004 02:48:28.539982   17163 start.go:297] selected driver: kvm2
	I1004 02:48:28.539987   17163 start.go:901] validating driver "kvm2" against <nil>
	I1004 02:48:28.540289   17163 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:48:28.540375   17163 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19546-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 02:48:28.555606   17163 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1004 02:48:28.555654   17163 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 02:48:28.556183   17163 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1004 02:48:28.556327   17163 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1004 02:48:28.556352   17163 cni.go:84] Creating CNI manager for ""
	I1004 02:48:28.556392   17163 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 02:48:28.556405   17163 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 02:48:28.556458   17163 start.go:340] cluster config:
	{Name:download-only-583140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-583140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 02:48:28.556565   17163 iso.go:125] acquiring lock: {Name:mka909be892d8392f9339eae5480e293574a6d6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:48:28.558308   17163 out.go:97] Starting "download-only-583140" primary control-plane node in "download-only-583140" cluster
	I1004 02:48:28.558322   17163 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 02:48:28.709054   17163 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1004 02:48:28.709117   17163 cache.go:56] Caching tarball of preloaded images
	I1004 02:48:28.709303   17163 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1004 02:48:28.711370   17163 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1004 02:48:28.711383   17163 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I1004 02:48:28.821941   17163 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19546-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-583140 host does not exist
	  To start a cluster, run: "minikube start -p download-only-583140"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-583140
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1004 02:48:41.677232   16879 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-774332 --alsologtostderr --binary-mirror http://127.0.0.1:34587 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-774332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-774332
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestOffline (63.66s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-329336 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-329336 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.574304948s)
helpers_test.go:175: Cleaning up "offline-crio-329336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-329336
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-329336: (1.083458912s)
--- PASS: TestOffline (63.66s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-335265
addons_test.go:945: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-335265: exit status 85 (48.604942ms)

                                                
                                                
-- stdout --
	* Profile "addons-335265" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-335265"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:956: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-335265
addons_test.go:956: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-335265: exit status 85 (48.907819ms)

                                                
                                                
-- stdout --
	* Profile "addons-335265" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-335265"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (206.01s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-335265 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=logviewer --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-335265 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=logviewer --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m26.00839067s)
--- PASS: TestAddons/Setup (206.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:570: (dbg) Run:  kubectl --context addons-335265 create ns new-namespace
addons_test.go:584: (dbg) Run:  kubectl --context addons-335265 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/parallel/Registry (17.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:322: registry stabilized in 3.778712ms
addons_test.go:324: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-nfhcd" [bf27c03f-b1e2-412d-a96b-4bb669dd6fd7] Running
addons_test.go:324: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005514261s
addons_test.go:327: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-csj4d" [b56921d1-efcc-463f-9f04-40fd7fde1775] Running
addons_test.go:327: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004691097s
addons_test.go:332: (dbg) Run:  kubectl --context addons-335265 delete po -l run=registry-test --now
addons_test.go:337: (dbg) Run:  kubectl --context addons-335265 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:337: (dbg) Done: kubectl --context addons-335265 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.531048078s)
addons_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 ip
addons_test.go:990: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.70s)
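The registry check above reduces to a single in-cluster probe: an ephemeral busybox pod wgets the registry Service DNS name. A minimal manual reproduction against the same profile (a sketch only; it assumes the addons-335265 cluster is still running with the registry addon enabled) would be:

	# spin up a throwaway pod and probe the in-cluster registry endpoint
	kubectl --context addons-335265 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# the node IP used by registry-proxy can be read back with:
	out/minikube-linux-amd64 -p addons-335265 ip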

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.88s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:759: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-889l7" [4bac15bb-c75d-49da-9afb-87074914a0af] Running
addons_test.go:759: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0053058s
addons_test.go:990: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-amd64 -p addons-335265 addons disable inspektor-gadget --alsologtostderr -v=1: (5.869156108s)
--- PASS: TestAddons/parallel/InspektorGadget (11.88s)

                                                
                                    
x
+
TestAddons/parallel/Logviewer (6.68s)

                                                
                                                
=== RUN   TestAddons/parallel/Logviewer
=== PAUSE TestAddons/parallel/Logviewer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Logviewer
addons_test.go:769: (dbg) TestAddons/parallel/Logviewer: waiting 8m0s for pods matching "app=logviewer" in namespace "kube-system" ...
helpers_test.go:344: "logviewer-7c79c8bcc9-ddvsm" [eaf2b3b6-6d22-4038-8bdc-d56ceebb3cb6] Running
addons_test.go:769: (dbg) TestAddons/parallel/Logviewer: app=logviewer healthy within 6.004648652s
addons_test.go:990: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 addons disable logviewer --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Logviewer (6.68s)

                                                
                                    
x
+
TestAddons/parallel/CSI (68.51s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1004 03:00:20.086957   16879 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:489: csi-hostpath-driver pods stabilized in 8.720932ms
addons_test.go:492: (dbg) Run:  kubectl --context addons-335265 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:497: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:502: (dbg) Run:  kubectl --context addons-335265 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:507: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f4e6b2fb-583b-4b6f-a6e8-f58abdad3f91] Pending
helpers_test.go:344: "task-pv-pod" [f4e6b2fb-583b-4b6f-a6e8-f58abdad3f91] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f4e6b2fb-583b-4b6f-a6e8-f58abdad3f91] Running
addons_test.go:507: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.004387467s
addons_test.go:512: (dbg) Run:  kubectl --context addons-335265 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:517: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-335265 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-335265 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:522: (dbg) Run:  kubectl --context addons-335265 delete pod task-pv-pod
addons_test.go:528: (dbg) Run:  kubectl --context addons-335265 delete pvc hpvc
addons_test.go:534: (dbg) Run:  kubectl --context addons-335265 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-335265 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:549: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2c65541e-45c5-4f91-a43a-809de05d2973] Pending
helpers_test.go:344: "task-pv-pod-restore" [2c65541e-45c5-4f91-a43a-809de05d2973] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2c65541e-45c5-4f91-a43a-809de05d2973] Running
addons_test.go:549: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005204362s
addons_test.go:554: (dbg) Run:  kubectl --context addons-335265 delete pod task-pv-pod-restore
addons_test.go:558: (dbg) Run:  kubectl --context addons-335265 delete pvc hpvc-restore
addons_test.go:562: (dbg) Run:  kubectl --context addons-335265 delete volumesnapshot new-snapshot-demo
addons_test.go:990: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:990: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-amd64 -p addons-335265 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.999204293s)
--- PASS: TestAddons/parallel/CSI (68.51s)
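The CSI run above walks one full lifecycle: provision a PVC with the hostpath driver, run a pod against it, snapshot the volume, delete the original pod and claim, then restore a new claim and pod from the snapshot. A condensed sketch of the same sequence, assuming the addons-335265 profile and the testdata/csi-hostpath-driver manifests from the minikube repository:

	kubectl --context addons-335265 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-335265 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-335265 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-335265 delete pod task-pv-pod
	kubectl --context addons-335265 delete pvc hpvc
	kubectl --context addons-335265 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-335265 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml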

                                                
                                    
x
+
TestAddons/parallel/Headlamp (19.86s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:744: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-335265 --alsologtostderr -v=1
addons_test.go:744: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-335265 --alsologtostderr -v=1: (1.130056619s)
addons_test.go:749: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-qn5pm" [e2c99064-b337-4b88-a8a3-6d5e45c89d41] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-qn5pm" [e2c99064-b337-4b88-a8a3-6d5e45c89d41] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-qn5pm" [e2c99064-b337-4b88-a8a3-6d5e45c89d41] Running
addons_test.go:749: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004147541s
addons_test.go:990: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 addons disable headlamp --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-amd64 -p addons-335265 addons disable headlamp --alsologtostderr -v=1: (5.723754058s)
--- PASS: TestAddons/parallel/Headlamp (19.86s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:786: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-gzjf2" [0dd2c087-2945-4f21-a0dd-723fb3337777] Running
addons_test.go:786: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003663166s
addons_test.go:990: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (57.64s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:894: (dbg) Run:  kubectl --context addons-335265 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:900: (dbg) Run:  kubectl --context addons-335265 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:904: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335265 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:907: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8d26b065-0c58-41f5-a2d7-d96fbacd3d0b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8d26b065-0c58-41f5-a2d7-d96fbacd3d0b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8d26b065-0c58-41f5-a2d7-d96fbacd3d0b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:907: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004228386s
addons_test.go:912: (dbg) Run:  kubectl --context addons-335265 get pvc test-pvc -o=json
addons_test.go:921: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 ssh "cat /opt/local-path-provisioner/pvc-14e1b505-7a2b-48a9-8f30-4f0b19662b44_default_test-pvc/file1"
addons_test.go:933: (dbg) Run:  kubectl --context addons-335265 delete pod test-local-path
addons_test.go:937: (dbg) Run:  kubectl --context addons-335265 delete pvc test-pvc
addons_test.go:990: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-amd64 -p addons-335265 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.846783715s)
--- PASS: TestAddons/parallel/LocalPath (57.64s)
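The LocalPath check exercises the rancher local-path provisioner end to end: a PVC is bound, a busybox pod writes a file into the claim, and the file is read back from the node's /opt/local-path-provisioner directory. A minimal sketch of that verification, assuming the addons-335265 profile and the pvc-14e1b505-... volume directory created during this particular run:

	kubectl --context addons-335265 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-335265 apply -f testdata/storage-provisioner-rancher/pod.yaml
	# once the pod has completed, read the provisioned file straight off the node
	out/minikube-linux-amd64 -p addons-335265 ssh \
	  "cat /opt/local-path-provisioner/pvc-14e1b505-7a2b-48a9-8f30-4f0b19662b44_default_test-pvc/file1"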

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.61s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:969: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-hk8t5" [9fc5b35d-0561-41df-ae69-27953695f6e2] Running
addons_test.go:969: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005204518s
addons_test.go:972: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-335265
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.61s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:980: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-2cgv8" [78107b1c-17c6-4630-a63d-004e7f7e77e0] Running
addons_test.go:980: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005717646s
addons_test.go:984: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 addons disable yakd --alsologtostderr -v=1
addons_test.go:984: (dbg) Done: out/minikube-linux-amd64 -p addons-335265 addons disable yakd --alsologtostderr -v=1: (6.269324741s)
--- PASS: TestAddons/parallel/Yakd (11.28s)

                                                
                                    
x
+
TestCertOptions (75.08s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-756541 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-756541 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m13.631523994s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-756541 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-756541 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-756541 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-756541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-756541
--- PASS: TestCertOptions (75.08s)

                                                
                                    
x
+
TestCertExpiration (321.16s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-363290 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-363290 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m26.646934638s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-363290 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-363290 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (53.022012827s)
helpers_test.go:175: Cleaning up "cert-expiration-363290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-363290
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-363290: (1.492417813s)
--- PASS: TestCertExpiration (321.16s)

                                                
                                    
x
+
TestForceSystemdFlag (118.58s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-519066 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-519066 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m57.428080395s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-519066 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-519066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-519066
--- PASS: TestForceSystemdFlag (118.58s)

                                                
                                    
x
+
TestForceSystemdEnv (99.65s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-391967 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-391967 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m38.846348201s)
helpers_test.go:175: Cleaning up "force-systemd-env-391967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-391967
--- PASS: TestForceSystemdEnv (99.65s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.54s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1004 04:07:39.231979   16879 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1004 04:07:39.232114   16879 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1004 04:07:39.263527   16879 install.go:62] docker-machine-driver-kvm2: exit status 1
W1004 04:07:39.263868   16879 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1004 04:07:39.263929   16879 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3997715360/001/docker-machine-driver-kvm2
I1004 04:07:39.542513   16879 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3997715360/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640] Decompressors:map[bz2:0xc0006b58c0 gz:0xc0006b58c8 tar:0xc0006b5870 tar.bz2:0xc0006b5880 tar.gz:0xc0006b5890 tar.xz:0xc0006b58a0 tar.zst:0xc0006b58b0 tbz2:0xc0006b5880 tgz:0xc0006b5890 txz:0xc0006b58a0 tzst:0xc0006b58b0 xz:0xc0006b58d0 zip:0xc0006b58e0 zst:0xc0006b58d8] Getters:map[file:0xc001a84390 http:0xc000afa550 https:0xc000afa5a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1004 04:07:39.542573   16879 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3997715360/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.54s)

TestErrorSpam/setup (41.14s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-148504 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-148504 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-148504 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-148504 --driver=kvm2  --container-runtime=crio: (41.138754389s)
--- PASS: TestErrorSpam/setup (41.14s)

TestErrorSpam/start (0.34s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.75s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 status
--- PASS: TestErrorSpam/status (0.75s)

TestErrorSpam/pause (1.61s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 pause
--- PASS: TestErrorSpam/pause (1.61s)

TestErrorSpam/unpause (1.86s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 unpause
--- PASS: TestErrorSpam/unpause (1.86s)

TestErrorSpam/stop (5.72s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 stop: (2.402701256s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 stop: (1.634385044s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-148504 --log_dir /tmp/nospam-148504 stop: (1.680254301s)
--- PASS: TestErrorSpam/stop (5.72s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19546-9647/.minikube/files/etc/test/nested/copy/16879/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (84.34s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-994735 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-994735 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m24.342340349s)
--- PASS: TestFunctional/serial/StartWithProxy (84.34s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (43.44s)
=== RUN   TestFunctional/serial/SoftStart
I1004 03:10:06.404636   16879 config.go:182] Loaded profile config "functional-994735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-994735 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-994735 --alsologtostderr -v=8: (43.439227879s)
functional_test.go:663: soft start took 43.439913081s for "functional-994735" cluster.
I1004 03:10:49.844337   16879 config.go:182] Loaded profile config "functional-994735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (43.44s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-994735 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-994735 cache add registry.k8s.io/pause:3.1: (1.074197628s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-994735 cache add registry.k8s.io/pause:3.3: (1.29194659s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-994735 cache add registry.k8s.io/pause:latest: (1.108098168s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

TestFunctional/serial/CacheCmd/cache/add_local (2.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-994735 /tmp/TestFunctionalserialCacheCmdcacheadd_local3254921837/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 cache add minikube-local-cache-test:functional-994735
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-994735 cache add minikube-local-cache-test:functional-994735: (1.929295717s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 cache delete minikube-local-cache-test:functional-994735
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-994735
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994735 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.485613ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
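The cache_reload steps above form a small how-to: remove a cached image from the node, confirm it is gone, then let `cache reload` push the cached images back. A minimal shell sketch replaying only the commands recorded in this log (the profile name functional-994735 and the pause image come from the log; the ordering and comments are the only additions):

    # remove the image from the node's runtime and confirm inspecti now fails
    out/minikube-linux-amd64 -p functional-994735 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-994735 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image gone, as expected"
    # reload the local cache onto the node, after which the same inspecti succeeds
    out/minikube-linux-amd64 -p functional-994735 cache reload
    out/minikube-linux-amd64 -p functional-994735 ssh sudo crictl inspecti registry.k8s.io/pause:latest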

TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 kubectl -- --context functional-994735 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-994735 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (369.2s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-994735 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1004 03:12:09.001008   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:12:09.007418   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:12:09.018781   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:12:09.040150   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:12:09.081442   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:12:09.162881   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:12:09.324376   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:12:09.646067   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:12:10.288087   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:12:11.569821   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:12:14.132684   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:12:19.254347   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:12:29.496122   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:12:49.977838   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:13:30.939676   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:14:52.862606   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-994735 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (6m9.196876662s)
functional_test.go:761: restart took 6m9.197002669s for "functional-994735" cluster.
I1004 03:17:07.106954   16879 config.go:182] Loaded profile config "functional-994735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (369.20s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-994735 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.37s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-994735 logs: (1.368409608s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)

TestFunctional/serial/LogsFileCmd (1.36s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 logs --file /tmp/TestFunctionalserialLogsFileCmd3792119038/001/logs.txt
E1004 03:17:08.994130   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-994735 logs --file /tmp/TestFunctionalserialLogsFileCmd3792119038/001/logs.txt: (1.361247086s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

TestFunctional/serial/InvalidService (4.44s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-994735 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-994735
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-994735: exit status 115 (266.109214ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.39:32679 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-994735 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.44s)
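For reference, the InvalidService check above can be replayed by hand; the exit status 115 and the SVC_UNREACHABLE reason come straight from the output captured above, and testdata/invalidsvc.yaml is the manifest shipped in the minikube test tree (only the echo of the exit code is added here for illustration):

    kubectl --context functional-994735 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-994735
    echo $?    # 115 expected: SVC_UNREACHABLE, no running pod backs the service
    kubectl --context functional-994735 delete -f testdata/invalidsvc.yaml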

TestFunctional/parallel/ConfigCmd (0.29s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994735 config get cpus: exit status 14 (51.470143ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994735 config get cpus: exit status 14 (43.961937ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)
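The ConfigCmd sequence above doubles as a usage sketch for `minikube config`: `config get` exits 14 with "specified key could not be found in config" while the key is unset, and exits 0 once a value has been set. Replaying the recorded commands in order (comments are added; everything else is taken from the log):

    out/minikube-linux-amd64 -p functional-994735 config get cpus      # exit 14 while unset
    out/minikube-linux-amd64 -p functional-994735 config set cpus 2
    out/minikube-linux-amd64 -p functional-994735 config get cpus      # prints 2, exit 0
    out/minikube-linux-amd64 -p functional-994735 config unset cpus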

TestFunctional/parallel/DashboardCmd (32.64s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-994735 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-994735 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 29091: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (32.64s)

TestFunctional/parallel/DryRun (0.3s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-994735 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-994735 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (145.33146ms)

                                                
                                                
-- stdout --
	* [functional-994735] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 03:17:27.054485   28600 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:17:27.054599   28600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:17:27.054609   28600 out.go:358] Setting ErrFile to fd 2...
	I1004 03:17:27.054616   28600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:17:27.054816   28600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 03:17:27.055347   28600 out.go:352] Setting JSON to false
	I1004 03:17:27.056217   28600 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3592,"bootTime":1728008255,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 03:17:27.056307   28600 start.go:139] virtualization: kvm guest
	I1004 03:17:27.058170   28600 out.go:177] * [functional-994735] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1004 03:17:27.059341   28600 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:17:27.059355   28600 notify.go:220] Checking for updates...
	I1004 03:17:27.061786   28600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:17:27.062958   28600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:17:27.064197   28600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:17:27.065425   28600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 03:17:27.066570   28600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:17:27.068357   28600 config.go:182] Loaded profile config "functional-994735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:17:27.068968   28600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:17:27.069043   28600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:17:27.084045   28600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I1004 03:17:27.084476   28600 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:17:27.085082   28600 main.go:141] libmachine: Using API Version  1
	I1004 03:17:27.085100   28600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:17:27.086238   28600 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:17:27.086429   28600 main.go:141] libmachine: (functional-994735) Calling .DriverName
	I1004 03:17:27.086664   28600 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:17:27.087071   28600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:17:27.087116   28600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:17:27.103787   28600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45063
	I1004 03:17:27.104191   28600 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:17:27.105000   28600 main.go:141] libmachine: Using API Version  1
	I1004 03:17:27.105027   28600 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:17:27.105420   28600 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:17:27.105589   28600 main.go:141] libmachine: (functional-994735) Calling .DriverName
	I1004 03:17:27.140018   28600 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 03:17:27.141031   28600 start.go:297] selected driver: kvm2
	I1004 03:17:27.141046   28600 start.go:901] validating driver "kvm2" against &{Name:functional-994735 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-994735 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:17:27.141167   28600 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:17:27.150435   28600 out.go:201] 
	W1004 03:17:27.151595   28600 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1004 03:17:27.152762   28600 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-994735 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
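The DryRun output above shows the memory-validation path: a start with --memory 250MB is rejected before any VM work, exiting 23 with reason RSRC_INSUFFICIENT_REQ_MEMORY because 250MiB is below the 1800MB usable minimum minikube reports. A sketch of the same check, using the exact flags recorded above (only the exit-code echo is added):

    out/minikube-linux-amd64 start -p functional-994735 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio
    echo $?    # 23 expected: RSRC_INSUFFICIENT_REQ_MEMORY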

TestFunctional/parallel/InternationalLanguage (0.13s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-994735 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-994735 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (132.280546ms)

                                                
                                                
-- stdout --
	* [functional-994735] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 03:17:26.929270   28568 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:17:26.929512   28568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:17:26.929521   28568 out.go:358] Setting ErrFile to fd 2...
	I1004 03:17:26.929525   28568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:17:26.929807   28568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 03:17:26.930290   28568 out.go:352] Setting JSON to false
	I1004 03:17:26.931091   28568 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3592,"bootTime":1728008255,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 03:17:26.931183   28568 start.go:139] virtualization: kvm guest
	I1004 03:17:26.933516   28568 out.go:177] * [functional-994735] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1004 03:17:26.934843   28568 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:17:26.934858   28568 notify.go:220] Checking for updates...
	I1004 03:17:26.937059   28568 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:17:26.938515   28568 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	I1004 03:17:26.939637   28568 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	I1004 03:17:26.940870   28568 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 03:17:26.942123   28568 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:17:26.943964   28568 config.go:182] Loaded profile config "functional-994735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:17:26.944431   28568 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:17:26.944503   28568 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:17:26.959543   28568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
	I1004 03:17:26.960048   28568 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:17:26.960578   28568 main.go:141] libmachine: Using API Version  1
	I1004 03:17:26.960602   28568 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:17:26.961027   28568 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:17:26.961205   28568 main.go:141] libmachine: (functional-994735) Calling .DriverName
	I1004 03:17:26.961427   28568 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:17:26.961706   28568 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:17:26.961739   28568 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:17:26.975875   28568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I1004 03:17:26.976277   28568 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:17:26.976670   28568 main.go:141] libmachine: Using API Version  1
	I1004 03:17:26.976688   28568 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:17:26.976974   28568 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:17:26.977130   28568 main.go:141] libmachine: (functional-994735) Calling .DriverName
	I1004 03:17:27.007513   28568 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1004 03:17:27.008689   28568 start.go:297] selected driver: kvm2
	I1004 03:17:27.008711   28568 start.go:901] validating driver "kvm2" against &{Name:functional-994735 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-994735 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:17:27.008832   28568 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:17:27.011004   28568 out.go:201] 
	W1004 03:17:27.012034   28568 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1004 03:17:27.013013   28568 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (1.12s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)

TestFunctional/parallel/ServiceCmdConnect (10.57s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-994735 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-994735 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-rqw6n" [72d3da35-c4c2-431d-ba75-fb565a3eaaa2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-rqw6n" [72d3da35-c4c2-431d-ba75-fb565a3eaaa2] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003499108s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.39:31104
functional_test.go:1675: http://192.168.39.39:31104: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-rqw6n

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.39:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.39:31104
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.57s)
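The ServiceCmdConnect run above is effectively a NodePort round trip: create a deployment, expose it, ask minikube for the URL, and fetch it. A minimal sketch built from the commands recorded above (the curl step and the URL variable are added for illustration and assume the pod has already reached Running):

    kubectl --context functional-994735 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-994735 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-994735 service hello-node-connect --url)
    curl "$URL"    # echoserver answers with the request details seen in the log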

TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (49.62s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [40fe9478-ba09-4a74-b915-d1507f3071e4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003815652s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-994735 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-994735 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-994735 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-994735 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [16ca59ee-a1d0-4004-a82c-5b39993ec720] Pending
helpers_test.go:344: "sp-pod" [16ca59ee-a1d0-4004-a82c-5b39993ec720] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [16ca59ee-a1d0-4004-a82c-5b39993ec720] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.004573294s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-994735 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-994735 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-994735 delete -f testdata/storage-provisioner/pod.yaml: (2.510804024s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-994735 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6c6aa981-1ca3-471c-84f0-1a420330d96a] Pending
helpers_test.go:344: "sp-pod" [6c6aa981-1ca3-471c-84f0-1a420330d96a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6c6aa981-1ca3-471c-84f0-1a420330d96a] Running
2024/10/04 03:17:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004065766s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-994735 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.62s)
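The PersistentVolumeClaim test above documents the persistence check itself: write a file into the PVC-backed mount, delete and recreate the consuming pod, and confirm the file is still there. A sketch using only the recorded commands (pvc.yaml and pod.yaml are the manifests in the minikube test tree; each apply needs sp-pod to reach Running before the exec):

    kubectl --context functional-994735 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-994735 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-994735 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-994735 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-994735 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-994735 exec sp-pod -- ls /tmp/mount    # foo survives the pod restart because it lives on the claim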

TestFunctional/parallel/SSHCmd (0.38s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

TestFunctional/parallel/CpCmd (1.25s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh -n functional-994735 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 cp functional-994735:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2490239493/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh -n functional-994735 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh -n functional-994735 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.25s)
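Note: a short sketch of the same copy round-trip outside the test harness (the /tmp target file name is hypothetical; the other paths match this run):

  out/minikube-linux-amd64 -p functional-994735 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-amd64 -p functional-994735 cp functional-994735:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
  diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt    # no output expected if the copy is faithful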

                                                
                                    
TestFunctional/parallel/MySQL (29.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-994735 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-m95z5" [7884face-b98b-4ab4-a60b-53e2563a5351] Pending
helpers_test.go:344: "mysql-6cdb49bbb-m95z5" [7884face-b98b-4ab4-a60b-53e2563a5351] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-m95z5" [7884face-b98b-4ab4-a60b-53e2563a5351] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.067831785s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-994735 exec mysql-6cdb49bbb-m95z5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-994735 exec mysql-6cdb49bbb-m95z5 -- mysql -ppassword -e "show databases;": exit status 1 (195.117414ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1004 03:17:55.831739   16879 retry.go:31] will retry after 1.152667608s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-994735 exec mysql-6cdb49bbb-m95z5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-994735 exec mysql-6cdb49bbb-m95z5 -- mysql -ppassword -e "show databases;": exit status 1 (140.358932ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1004 03:17:57.125113   16879 retry.go:31] will retry after 1.35845016s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-994735 exec mysql-6cdb49bbb-m95z5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.38s)
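Note: the ERROR 2002 retries above are expected while mysqld is still initializing inside the pod; a minimal manual poll looks like this (pod name is from this run and changes on every deploy):

  until kubectl --context functional-994735 exec mysql-6cdb49bbb-m95z5 -- \
      mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
    sleep 2    # the socket /var/run/mysqld/mysqld.sock appears once init finishes
  done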

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/16879/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "sudo cat /etc/test/nested/copy/16879/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)
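Note: the same synced file can be read back directly over ssh if needed (the path is the one this run pre-seeded):

  out/minikube-linux-amd64 -p functional-994735 ssh "cat /etc/test/nested/copy/16879/hosts"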

                                                
                                    
TestFunctional/parallel/CertSync (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/16879.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "sudo cat /etc/ssl/certs/16879.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/16879.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "sudo cat /usr/share/ca-certificates/16879.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/168792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "sudo cat /etc/ssl/certs/168792.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/168792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "sudo cat /usr/share/ca-certificates/168792.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.32s)
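Note: the /etc/ssl/certs/*.0 entries are hash-named copies of the synced certs, so the two reads below are expected to return identical PEM contents (file names are the ones from this run):

  out/minikube-linux-amd64 -p functional-994735 ssh "sudo cat /usr/share/ca-certificates/16879.pem"
  out/minikube-linux-amd64 -p functional-994735 ssh "sudo cat /etc/ssl/certs/51391683.0"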

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-994735 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
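Note: an equivalent, less template-heavy way to inspect the same labels (not what the test runs, just a convenience):

  kubectl --context functional-994735 get nodes --show-labels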

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994735 ssh "sudo systemctl is-active docker": exit status 1 (261.504517ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994735 ssh "sudo systemctl is-active containerd": exit status 1 (247.842055ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
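Note: with crio as the configured runtime, "systemctl is-active" reports the other runtimes as inactive and exits non-zero (status 3), which is exactly the failure the test expects from the ssh call; the corresponding positive check would be:

  out/minikube-linux-amd64 -p functional-994735 ssh "sudo systemctl is-active crio"    # expect "active" and exit status 0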

                                                
                                    
TestFunctional/parallel/License (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.67s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-994735 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-994735 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-vl8lj" [27b02b53-fa59-4136-817b-8584b43c65e1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-vl8lj" [27b02b53-fa59-4136-817b-8584b43c65e1] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004287961s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)
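Note: the deployment and NodePort service created here are what the later ServiceCmd subtests (List, HTTPS, Format, URL) resolve; a condensed manual equivalent with the same names:

  kubectl --context functional-994735 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-994735 expose deployment hello-node --type=NodePort --port=8080
  kubectl --context functional-994735 get pods -l app=hello-node    # wait for Running before querying the service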

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "297.589342ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "43.88836ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "334.150953ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "42.525877ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
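Note: the JSON output is meant for scripting; a small sketch that extracts profile names (jq and the valid/Name field layout are assumptions about the current output shape, not something this test asserts):

  out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'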

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-994735 /tmp/TestFunctionalparallelMountCmdany-port2847978451/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728011836731307200" to /tmp/TestFunctionalparallelMountCmdany-port2847978451/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728011836731307200" to /tmp/TestFunctionalparallelMountCmdany-port2847978451/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728011836731307200" to /tmp/TestFunctionalparallelMountCmdany-port2847978451/001/test-1728011836731307200
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994735 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (247.84574ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1004 03:17:16.979454   16879 retry.go:31] will retry after 624.261815ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  4 03:17 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  4 03:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  4 03:17 test-1728011836731307200
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh cat /mount-9p/test-1728011836731307200
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-994735 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e8c4f01a-5dcf-4273-b0f7-9bb68e1989fe] Pending
helpers_test.go:344: "busybox-mount" [e8c4f01a-5dcf-4273-b0f7-9bb68e1989fe] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e8c4f01a-5dcf-4273-b0f7-9bb68e1989fe] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e8c4f01a-5dcf-4273-b0f7-9bb68e1989fe] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.003151477s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-994735 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-994735 /tmp/TestFunctionalparallelMountCmdany-port2847978451/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.83s)
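Note: a trimmed manual version of the 9p mount check above (the host directory is hypothetical; the mount command stays in the foreground, so background it or use a second terminal):

  out/minikube-linux-amd64 mount -p functional-994735 /tmp/mount-demo:/mount-9p &
  out/minikube-linux-amd64 -p functional-994735 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-994735 ssh "ls -la /mount-9p"
  out/minikube-linux-amd64 -p functional-994735 ssh "sudo umount -f /mount-9p"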

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-994735 /tmp/TestFunctionalparallelMountCmdspecific-port2940932545/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994735 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (251.050361ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1004 03:17:26.812740   16879 retry.go:31] will retry after 441.441467ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-994735 /tmp/TestFunctionalparallelMountCmdspecific-port2940932545/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994735 ssh "sudo umount -f /mount-9p": exit status 1 (233.478454ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-994735 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-994735 /tmp/TestFunctionalparallelMountCmdspecific-port2940932545/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.81s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 service list -o json
functional_test.go:1494: Took "344.309787ms" to run "out/minikube-linux-amd64 -p functional-994735 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.39:30091
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.58s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-994735 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3349812496/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-994735 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3349812496/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-994735 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3349812496/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994735 ssh "findmnt -T" /mount1: exit status 1 (308.645652ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1004 03:17:28.683842   16879 retry.go:31] will retry after 698.74674ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-994735 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-994735 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3349812496/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-994735 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3349812496/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-994735 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3349812496/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)
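Note: "mount --kill=true" is the cleanup step used above; it terminates any lingering mount helper processes for the profile, after which the findmnt checks are expected to fail:

  out/minikube-linux-amd64 mount -p functional-994735 --kill=true
  out/minikube-linux-amd64 -p functional-994735 ssh "findmnt -T /mount1"    # non-zero exit once the mount is gone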

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.39:30091
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.53s)
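Note: the discovered endpoint is a plain NodePort and can be exercised directly (the IP/port pair is the one found in this run and will differ on other machines):

  curl -s http://192.168.39.39:30091/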

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
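Note: update-context rewrites the kubeconfig entry for the profile so kubectl points at the current VM address; one way to confirm afterwards (the jsonpath filter is an illustration, not part of the test):

  out/minikube-linux-amd64 -p functional-994735 update-context
  kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-994735")].cluster.server}'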

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-994735 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-994735
localhost/kicbase/echo-server:functional-994735
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-994735 image ls --format short --alsologtostderr:
I1004 03:17:50.264212   30224 out.go:345] Setting OutFile to fd 1 ...
I1004 03:17:50.264313   30224 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:17:50.264321   30224 out.go:358] Setting ErrFile to fd 2...
I1004 03:17:50.264326   30224 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:17:50.264493   30224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
I1004 03:17:50.265048   30224 config.go:182] Loaded profile config "functional-994735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:17:50.265152   30224 config.go:182] Loaded profile config "functional-994735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:17:50.265502   30224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 03:17:50.265542   30224 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 03:17:50.280012   30224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36149
I1004 03:17:50.280469   30224 main.go:141] libmachine: () Calling .GetVersion
I1004 03:17:50.280961   30224 main.go:141] libmachine: Using API Version  1
I1004 03:17:50.280981   30224 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 03:17:50.281368   30224 main.go:141] libmachine: () Calling .GetMachineName
I1004 03:17:50.281570   30224 main.go:141] libmachine: (functional-994735) Calling .GetState
I1004 03:17:50.283387   30224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 03:17:50.283435   30224 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 03:17:50.297579   30224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42863
I1004 03:17:50.297925   30224 main.go:141] libmachine: () Calling .GetVersion
I1004 03:17:50.298341   30224 main.go:141] libmachine: Using API Version  1
I1004 03:17:50.298356   30224 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 03:17:50.298632   30224 main.go:141] libmachine: () Calling .GetMachineName
I1004 03:17:50.298819   30224 main.go:141] libmachine: (functional-994735) Calling .DriverName
I1004 03:17:50.299023   30224 ssh_runner.go:195] Run: systemctl --version
I1004 03:17:50.299049   30224 main.go:141] libmachine: (functional-994735) Calling .GetSSHHostname
I1004 03:17:50.301714   30224 main.go:141] libmachine: (functional-994735) DBG | domain functional-994735 has defined MAC address 52:54:00:90:5c:c3 in network mk-functional-994735
I1004 03:17:50.302071   30224 main.go:141] libmachine: (functional-994735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:5c:c3", ip: ""} in network mk-functional-994735: {Iface:virbr1 ExpiryTime:2024-10-04 04:08:56 +0000 UTC Type:0 Mac:52:54:00:90:5c:c3 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:functional-994735 Clientid:01:52:54:00:90:5c:c3}
I1004 03:17:50.302093   30224 main.go:141] libmachine: (functional-994735) DBG | domain functional-994735 has defined IP address 192.168.39.39 and MAC address 52:54:00:90:5c:c3 in network mk-functional-994735
I1004 03:17:50.302231   30224 main.go:141] libmachine: (functional-994735) Calling .GetSSHPort
I1004 03:17:50.302394   30224 main.go:141] libmachine: (functional-994735) Calling .GetSSHKeyPath
I1004 03:17:50.302530   30224 main.go:141] libmachine: (functional-994735) Calling .GetSSHUsername
I1004 03:17:50.302644   30224 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/functional-994735/id_rsa Username:docker}
I1004 03:17:50.386968   30224 ssh_runner.go:195] Run: sudo crictl images --output json
I1004 03:17:50.431896   30224 main.go:141] libmachine: Making call to close driver server
I1004 03:17:50.431914   30224 main.go:141] libmachine: (functional-994735) Calling .Close
I1004 03:17:50.432210   30224 main.go:141] libmachine: (functional-994735) DBG | Closing plugin on server side
I1004 03:17:50.432281   30224 main.go:141] libmachine: Successfully made call to close driver server
I1004 03:17:50.432306   30224 main.go:141] libmachine: Making call to close connection to plugin binary
I1004 03:17:50.432318   30224 main.go:141] libmachine: Making call to close driver server
I1004 03:17:50.432329   30224 main.go:141] libmachine: (functional-994735) Calling .Close
I1004 03:17:50.432559   30224 main.go:141] libmachine: Successfully made call to close driver server
I1004 03:17:50.432575   30224 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-994735 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-994735  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 7f553e8bbc897 | 196MB  |
| localhost/minikube-local-cache-test     | functional-994735  | 9b4278813d98a | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| localhost/my-image                      | functional-994735  | 5ee5e4b52d191 | 1.47MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-994735 image ls --format table --alsologtostderr:
I1004 03:17:57.892123   30453 out.go:345] Setting OutFile to fd 1 ...
I1004 03:17:57.892256   30453 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:17:57.892265   30453 out.go:358] Setting ErrFile to fd 2...
I1004 03:17:57.892269   30453 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:17:57.892421   30453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
I1004 03:17:57.893017   30453 config.go:182] Loaded profile config "functional-994735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:17:57.893122   30453 config.go:182] Loaded profile config "functional-994735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:17:57.893479   30453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 03:17:57.893519   30453 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 03:17:57.907703   30453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45071
I1004 03:17:57.908253   30453 main.go:141] libmachine: () Calling .GetVersion
I1004 03:17:57.908789   30453 main.go:141] libmachine: Using API Version  1
I1004 03:17:57.908810   30453 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 03:17:57.909600   30453 main.go:141] libmachine: () Calling .GetMachineName
I1004 03:17:57.910495   30453 main.go:141] libmachine: (functional-994735) Calling .GetState
I1004 03:17:57.912331   30453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 03:17:57.912368   30453 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 03:17:57.927540   30453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36245
I1004 03:17:57.927957   30453 main.go:141] libmachine: () Calling .GetVersion
I1004 03:17:57.928392   30453 main.go:141] libmachine: Using API Version  1
I1004 03:17:57.928412   30453 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 03:17:57.928686   30453 main.go:141] libmachine: () Calling .GetMachineName
I1004 03:17:57.928859   30453 main.go:141] libmachine: (functional-994735) Calling .DriverName
I1004 03:17:57.929069   30453 ssh_runner.go:195] Run: systemctl --version
I1004 03:17:57.929090   30453 main.go:141] libmachine: (functional-994735) Calling .GetSSHHostname
I1004 03:17:57.931603   30453 main.go:141] libmachine: (functional-994735) DBG | domain functional-994735 has defined MAC address 52:54:00:90:5c:c3 in network mk-functional-994735
I1004 03:17:57.931979   30453 main.go:141] libmachine: (functional-994735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:5c:c3", ip: ""} in network mk-functional-994735: {Iface:virbr1 ExpiryTime:2024-10-04 04:08:56 +0000 UTC Type:0 Mac:52:54:00:90:5c:c3 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:functional-994735 Clientid:01:52:54:00:90:5c:c3}
I1004 03:17:57.932011   30453 main.go:141] libmachine: (functional-994735) DBG | domain functional-994735 has defined IP address 192.168.39.39 and MAC address 52:54:00:90:5c:c3 in network mk-functional-994735
I1004 03:17:57.932114   30453 main.go:141] libmachine: (functional-994735) Calling .GetSSHPort
I1004 03:17:57.932287   30453 main.go:141] libmachine: (functional-994735) Calling .GetSSHKeyPath
I1004 03:17:57.932425   30453 main.go:141] libmachine: (functional-994735) Calling .GetSSHUsername
I1004 03:17:57.932561   30453 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/functional-994735/id_rsa Username:docker}
I1004 03:17:58.014248   30453 ssh_runner.go:195] Run: sudo crictl images --output json
I1004 03:17:58.063897   30453 main.go:141] libmachine: Making call to close driver server
I1004 03:17:58.063911   30453 main.go:141] libmachine: (functional-994735) Calling .Close
I1004 03:17:58.064202   30453 main.go:141] libmachine: Successfully made call to close driver server
I1004 03:17:58.064223   30453 main.go:141] libmachine: Making call to close connection to plugin binary
I1004 03:17:58.064241   30453 main.go:141] libmachine: Making call to close driver server
I1004 03:17:58.064250   30453 main.go:141] libmachine: (functional-994735) Calling .Close
I1004 03:17:58.064496   30453 main.go:141] libmachine: Successfully made call to close driver server
I1004 03:17:58.064514   30453 main.go:141] libmachine: Making call to close connection to plugin binary
I1004 03:17:58.064496   30453 main.go:141] libmachine: (functional-994735) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-994735 image ls --format json --alsologtostderr:
[{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"5ee5e4b52d1919ab0497d6bfe6a88f2be674d23b86cbd99fdc7f549858ec2f08","repoDigests":["localhost/my-image@sha256:32182e577b6d6a0e19b840a057772d3557c526b7ee220c721d160b52f61efc35"],"repoTags":["localhost/my-image:functional-994735"],"size":"1468599"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"8943
7508"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/
k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"9b4278813d98a1bcebefec78f999a7f59bd8bffc61ad5ae7414f71944387fa8a","repoDigests":["localhost/minikube-local-cache-test@sha256:18cdf78ea28209ce7f83b6cf30b8b9a9d818e4a7c35d5868819a196079edafab"],"repoTags":["localhost/minikube-local-cache-test:functional-994735"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227
e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0","repoDigests":["docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b","docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818028"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/
pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"8d8f9f44b88b35c904b4cda001ab6a1cd47633edfbfd61de0e005703f5f6d615","repoDigests":["docker.io/library/4b1040c6c86779aea6e4ddca4d5709097201f4826efef368af2496cea2e1587f-tmp@sha256:3c67039d0708b051c856838a27d639e52af1657f8432b118bf2c2dfa4993238c"],"repoTags":[],"size":"1466018"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb2
8c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-994735"],"size":"4943877"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-994735 image ls --format json --alsologtostderr:
I1004 03:17:57.682497   30429 out.go:345] Setting OutFile to fd 1 ...
I1004 03:17:57.682747   30429 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:17:57.682756   30429 out.go:358] Setting ErrFile to fd 2...
I1004 03:17:57.682761   30429 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:17:57.682937   30429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
I1004 03:17:57.683571   30429 config.go:182] Loaded profile config "functional-994735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:17:57.683667   30429 config.go:182] Loaded profile config "functional-994735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:17:57.684059   30429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 03:17:57.684098   30429 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 03:17:57.700317   30429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39543
I1004 03:17:57.700826   30429 main.go:141] libmachine: () Calling .GetVersion
I1004 03:17:57.701550   30429 main.go:141] libmachine: Using API Version  1
I1004 03:17:57.701584   30429 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 03:17:57.701978   30429 main.go:141] libmachine: () Calling .GetMachineName
I1004 03:17:57.702208   30429 main.go:141] libmachine: (functional-994735) Calling .GetState
I1004 03:17:57.704214   30429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 03:17:57.704252   30429 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 03:17:57.718880   30429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44951
I1004 03:17:57.719377   30429 main.go:141] libmachine: () Calling .GetVersion
I1004 03:17:57.719966   30429 main.go:141] libmachine: Using API Version  1
I1004 03:17:57.719989   30429 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 03:17:57.720304   30429 main.go:141] libmachine: () Calling .GetMachineName
I1004 03:17:57.720485   30429 main.go:141] libmachine: (functional-994735) Calling .DriverName
I1004 03:17:57.720717   30429 ssh_runner.go:195] Run: systemctl --version
I1004 03:17:57.720748   30429 main.go:141] libmachine: (functional-994735) Calling .GetSSHHostname
I1004 03:17:57.723567   30429 main.go:141] libmachine: (functional-994735) DBG | domain functional-994735 has defined MAC address 52:54:00:90:5c:c3 in network mk-functional-994735
I1004 03:17:57.724022   30429 main.go:141] libmachine: (functional-994735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:5c:c3", ip: ""} in network mk-functional-994735: {Iface:virbr1 ExpiryTime:2024-10-04 04:08:56 +0000 UTC Type:0 Mac:52:54:00:90:5c:c3 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:functional-994735 Clientid:01:52:54:00:90:5c:c3}
I1004 03:17:57.724062   30429 main.go:141] libmachine: (functional-994735) DBG | domain functional-994735 has defined IP address 192.168.39.39 and MAC address 52:54:00:90:5c:c3 in network mk-functional-994735
I1004 03:17:57.724195   30429 main.go:141] libmachine: (functional-994735) Calling .GetSSHPort
I1004 03:17:57.724360   30429 main.go:141] libmachine: (functional-994735) Calling .GetSSHKeyPath
I1004 03:17:57.724538   30429 main.go:141] libmachine: (functional-994735) Calling .GetSSHUsername
I1004 03:17:57.724678   30429 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/functional-994735/id_rsa Username:docker}
I1004 03:17:57.810162   30429 ssh_runner.go:195] Run: sudo crictl images --output json
I1004 03:17:57.848891   30429 main.go:141] libmachine: Making call to close driver server
I1004 03:17:57.848901   30429 main.go:141] libmachine: (functional-994735) Calling .Close
I1004 03:17:57.849145   30429 main.go:141] libmachine: Successfully made call to close driver server
I1004 03:17:57.849164   30429 main.go:141] libmachine: Making call to close connection to plugin binary
I1004 03:17:57.849182   30429 main.go:141] libmachine: (functional-994735) DBG | Closing plugin on server side
I1004 03:17:57.849242   30429 main.go:141] libmachine: Making call to close driver server
I1004 03:17:57.849266   30429 main.go:141] libmachine: (functional-994735) Calling .Close
I1004 03:17:57.849476   30429 main.go:141] libmachine: Successfully made call to close driver server
I1004 03:17:57.849492   30429 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-994735 image ls --format yaml --alsologtostderr:
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-994735
size: "4943877"
- id: 9b4278813d98a1bcebefec78f999a7f59bd8bffc61ad5ae7414f71944387fa8a
repoDigests:
- localhost/minikube-local-cache-test@sha256:18cdf78ea28209ce7f83b6cf30b8b9a9d818e4a7c35d5868819a196079edafab
repoTags:
- localhost/minikube-local-cache-test:functional-994735
size: "3330"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0
repoDigests:
- docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "195818028"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-994735 image ls --format yaml --alsologtostderr:
I1004 03:17:50.476478   30248 out.go:345] Setting OutFile to fd 1 ...
I1004 03:17:50.476577   30248 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:17:50.476586   30248 out.go:358] Setting ErrFile to fd 2...
I1004 03:17:50.476591   30248 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:17:50.476757   30248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
I1004 03:17:50.477291   30248 config.go:182] Loaded profile config "functional-994735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:17:50.477383   30248 config.go:182] Loaded profile config "functional-994735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:17:50.477729   30248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 03:17:50.477762   30248 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 03:17:50.492054   30248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42813
I1004 03:17:50.492593   30248 main.go:141] libmachine: () Calling .GetVersion
I1004 03:17:50.493155   30248 main.go:141] libmachine: Using API Version  1
I1004 03:17:50.493173   30248 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 03:17:50.493485   30248 main.go:141] libmachine: () Calling .GetMachineName
I1004 03:17:50.493657   30248 main.go:141] libmachine: (functional-994735) Calling .GetState
I1004 03:17:50.495588   30248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 03:17:50.495624   30248 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 03:17:50.509559   30248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42727
I1004 03:17:50.510014   30248 main.go:141] libmachine: () Calling .GetVersion
I1004 03:17:50.510528   30248 main.go:141] libmachine: Using API Version  1
I1004 03:17:50.510547   30248 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 03:17:50.510836   30248 main.go:141] libmachine: () Calling .GetMachineName
I1004 03:17:50.511015   30248 main.go:141] libmachine: (functional-994735) Calling .DriverName
I1004 03:17:50.511193   30248 ssh_runner.go:195] Run: systemctl --version
I1004 03:17:50.511233   30248 main.go:141] libmachine: (functional-994735) Calling .GetSSHHostname
I1004 03:17:50.513857   30248 main.go:141] libmachine: (functional-994735) DBG | domain functional-994735 has defined MAC address 52:54:00:90:5c:c3 in network mk-functional-994735
I1004 03:17:50.514291   30248 main.go:141] libmachine: (functional-994735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:5c:c3", ip: ""} in network mk-functional-994735: {Iface:virbr1 ExpiryTime:2024-10-04 04:08:56 +0000 UTC Type:0 Mac:52:54:00:90:5c:c3 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:functional-994735 Clientid:01:52:54:00:90:5c:c3}
I1004 03:17:50.514324   30248 main.go:141] libmachine: (functional-994735) DBG | domain functional-994735 has defined IP address 192.168.39.39 and MAC address 52:54:00:90:5c:c3 in network mk-functional-994735
I1004 03:17:50.514487   30248 main.go:141] libmachine: (functional-994735) Calling .GetSSHPort
I1004 03:17:50.514649   30248 main.go:141] libmachine: (functional-994735) Calling .GetSSHKeyPath
I1004 03:17:50.514817   30248 main.go:141] libmachine: (functional-994735) Calling .GetSSHUsername
I1004 03:17:50.514951   30248 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/functional-994735/id_rsa Username:docker}
I1004 03:17:50.598632   30248 ssh_runner.go:195] Run: sudo crictl images --output json
I1004 03:17:50.640539   30248 main.go:141] libmachine: Making call to close driver server
I1004 03:17:50.640551   30248 main.go:141] libmachine: (functional-994735) Calling .Close
I1004 03:17:50.640815   30248 main.go:141] libmachine: Successfully made call to close driver server
I1004 03:17:50.640832   30248 main.go:141] libmachine: Making call to close connection to plugin binary
I1004 03:17:50.640852   30248 main.go:141] libmachine: Making call to close driver server
I1004 03:17:50.640860   30248 main.go:141] libmachine: (functional-994735) Calling .Close
I1004 03:17:50.640858   30248 main.go:141] libmachine: (functional-994735) DBG | Closing plugin on server side
I1004 03:17:50.641107   30248 main.go:141] libmachine: (functional-994735) DBG | Closing plugin on server side
I1004 03:17:50.641105   30248 main.go:141] libmachine: Successfully made call to close driver server
I1004 03:17:50.641150   30248 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
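Note on the YAML listing above: each entry is a flat record with id, repoDigests, repoTags and size fields. As an illustration only (this is not part of functional_test.go), a minimal Go sketch that decodes such output, assuming gopkg.in/yaml.v3 is available and the YAML is piped in on stdin, could look like this:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// imageRecord mirrors the fields shown in the "image ls --format yaml" output above.
type imageRecord struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	// Read the YAML document, e.g. piped from:
	//   out/minikube-linux-amd64 -p functional-994735 image ls --format yaml
	raw, err := io.ReadAll(os.Stdin)
	if err != nil {
		panic(err)
	}
	var images []imageRecord
	if err := yaml.Unmarshal(raw, &images); err != nil {
		panic(err)
	}
	// Print one line per image: first tag (if any) and its reported size in bytes.
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%s %s bytes\n", tag, img.Size)
	}
}

The size field is printed by the CLI as a quoted string, so it is kept as a string here rather than parsed into an integer.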

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994735 ssh pgrep buildkitd: exit status 1 (183.425202ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image build -t localhost/my-image:functional-994735 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-994735 image build -t localhost/my-image:functional-994735 testdata/build --alsologtostderr: (6.531866724s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-994735 image build -t localhost/my-image:functional-994735 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8d8f9f44b88
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-994735
--> 5ee5e4b52d1
Successfully tagged localhost/my-image:functional-994735
5ee5e4b52d1919ab0497d6bfe6a88f2be674d23b86cbd99fdc7f549858ec2f08
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-994735 image build -t localhost/my-image:functional-994735 testdata/build --alsologtostderr:
I1004 03:17:50.868965   30303 out.go:345] Setting OutFile to fd 1 ...
I1004 03:17:50.869096   30303 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:17:50.869105   30303 out.go:358] Setting ErrFile to fd 2...
I1004 03:17:50.869110   30303 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:17:50.869297   30303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
I1004 03:17:50.869893   30303 config.go:182] Loaded profile config "functional-994735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:17:50.870444   30303 config.go:182] Loaded profile config "functional-994735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1004 03:17:50.870776   30303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 03:17:50.870811   30303 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 03:17:50.885548   30303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40603
I1004 03:17:50.885965   30303 main.go:141] libmachine: () Calling .GetVersion
I1004 03:17:50.886539   30303 main.go:141] libmachine: Using API Version  1
I1004 03:17:50.886567   30303 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 03:17:50.886898   30303 main.go:141] libmachine: () Calling .GetMachineName
I1004 03:17:50.887156   30303 main.go:141] libmachine: (functional-994735) Calling .GetState
I1004 03:17:50.889025   30303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 03:17:50.889064   30303 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 03:17:50.904097   30303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
I1004 03:17:50.904547   30303 main.go:141] libmachine: () Calling .GetVersion
I1004 03:17:50.905049   30303 main.go:141] libmachine: Using API Version  1
I1004 03:17:50.905098   30303 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 03:17:50.905455   30303 main.go:141] libmachine: () Calling .GetMachineName
I1004 03:17:50.905642   30303 main.go:141] libmachine: (functional-994735) Calling .DriverName
I1004 03:17:50.905831   30303 ssh_runner.go:195] Run: systemctl --version
I1004 03:17:50.905865   30303 main.go:141] libmachine: (functional-994735) Calling .GetSSHHostname
I1004 03:17:50.908642   30303 main.go:141] libmachine: (functional-994735) DBG | domain functional-994735 has defined MAC address 52:54:00:90:5c:c3 in network mk-functional-994735
I1004 03:17:50.909012   30303 main.go:141] libmachine: (functional-994735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:5c:c3", ip: ""} in network mk-functional-994735: {Iface:virbr1 ExpiryTime:2024-10-04 04:08:56 +0000 UTC Type:0 Mac:52:54:00:90:5c:c3 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:functional-994735 Clientid:01:52:54:00:90:5c:c3}
I1004 03:17:50.909037   30303 main.go:141] libmachine: (functional-994735) DBG | domain functional-994735 has defined IP address 192.168.39.39 and MAC address 52:54:00:90:5c:c3 in network mk-functional-994735
I1004 03:17:50.909172   30303 main.go:141] libmachine: (functional-994735) Calling .GetSSHPort
I1004 03:17:50.909357   30303 main.go:141] libmachine: (functional-994735) Calling .GetSSHKeyPath
I1004 03:17:50.909497   30303 main.go:141] libmachine: (functional-994735) Calling .GetSSHUsername
I1004 03:17:50.909639   30303 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/functional-994735/id_rsa Username:docker}
I1004 03:17:50.994663   30303 build_images.go:161] Building image from path: /tmp/build.3549576108.tar
I1004 03:17:50.994723   30303 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1004 03:17:51.009049   30303 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3549576108.tar
I1004 03:17:51.018982   30303 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3549576108.tar: stat -c "%s %y" /var/lib/minikube/build/build.3549576108.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3549576108.tar': No such file or directory
I1004 03:17:51.019019   30303 ssh_runner.go:362] scp /tmp/build.3549576108.tar --> /var/lib/minikube/build/build.3549576108.tar (3072 bytes)
I1004 03:17:51.061879   30303 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3549576108
I1004 03:17:51.076771   30303 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3549576108 -xf /var/lib/minikube/build/build.3549576108.tar
I1004 03:17:51.088191   30303 crio.go:315] Building image: /var/lib/minikube/build/build.3549576108
I1004 03:17:51.088257   30303 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-994735 /var/lib/minikube/build/build.3549576108 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1004 03:17:57.336001   30303 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-994735 /var/lib/minikube/build/build.3549576108 --cgroup-manager=cgroupfs: (6.247711989s)
I1004 03:17:57.336050   30303 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3549576108
I1004 03:17:57.346673   30303 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3549576108.tar
I1004 03:17:57.356442   30303 build_images.go:217] Built localhost/my-image:functional-994735 from /tmp/build.3549576108.tar
I1004 03:17:57.356474   30303 build_images.go:133] succeeded building to: functional-994735
I1004 03:17:57.356479   30303 build_images.go:134] failed building to: 
I1004 03:17:57.356525   30303 main.go:141] libmachine: Making call to close driver server
I1004 03:17:57.356550   30303 main.go:141] libmachine: (functional-994735) Calling .Close
I1004 03:17:57.356850   30303 main.go:141] libmachine: Successfully made call to close driver server
I1004 03:17:57.356867   30303 main.go:141] libmachine: Making call to close connection to plugin binary
I1004 03:17:57.356875   30303 main.go:141] libmachine: (functional-994735) DBG | Closing plugin on server side
I1004 03:17:57.356880   30303 main.go:141] libmachine: Making call to close driver server
I1004 03:17:57.356889   30303 main.go:141] libmachine: (functional-994735) Calling .Close
I1004 03:17:57.357156   30303 main.go:141] libmachine: (functional-994735) DBG | Closing plugin on server side
I1004 03:17:57.357174   30303 main.go:141] libmachine: Successfully made call to close driver server
I1004 03:17:57.357200   30303 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.99s)
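Note on the build flow above: the CLI tars the local testdata/build context to /tmp, copies it to /var/lib/minikube/build on the node, extracts it, and runs podman build inside the VM. A rough, illustrative sketch (not the actual functional_test.go code) that drives the same two commands shown in the log from Go could be:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-994735" // profile name taken from the log above

	// Build localhost/my-image from the testdata/build context, as the test does.
	build := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "build", "-t", "localhost/my-image:"+profile, "testdata/build", "--alsologtostderr")
	if out, err := build.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("image build failed: %v\n%s", err, out))
	}

	// Verify the new tag shows up in the image list, mirroring the follow-up "image ls" step.
	ls := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "ls")
	out, err := ls.Output()
	if err != nil {
		panic(err)
	}
	if !strings.Contains(string(out), "localhost/my-image:"+profile) {
		panic("built image not found in image ls output")
	}
	fmt.Println("localhost/my-image present in image ls")
}

This assumes the minikube binary path and the testdata/build directory from the log exist relative to the working directory; adjust both for a different checkout layout.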

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.96014413s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-994735
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image load --daemon kicbase/echo-server:functional-994735 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-994735 image load --daemon kicbase/echo-server:functional-994735 --alsologtostderr: (2.254875177s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image load --daemon kicbase/echo-server:functional-994735 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
E1004 03:17:36.704927   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-994735
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image load --daemon kicbase/echo-server:functional-994735 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.07s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image save kicbase/echo-server:functional-994735 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.80s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image rm kicbase/echo-server:functional-994735 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-994735 image rm kicbase/echo-server:functional-994735 --alsologtostderr: (1.498951344s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.87s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-994735 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.66883901s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.94s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-994735
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-994735 image save --daemon kicbase/echo-server:functional-994735 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-994735
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-994735
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-994735
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-994735
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (197.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-994751 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-994751 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m17.233188811s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (197.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-994751 -- rollout status deployment/busybox: (5.451943646s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- exec busybox-7dff88458-nrdqk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- exec busybox-7dff88458-vh5j6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- exec busybox-7dff88458-wc5kg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- exec busybox-7dff88458-nrdqk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- exec busybox-7dff88458-vh5j6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- exec busybox-7dff88458-wc5kg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- exec busybox-7dff88458-nrdqk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- exec busybox-7dff88458-vh5j6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- exec busybox-7dff88458-wc5kg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.61s)
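Note on the DNS probes above: the test simply execs nslookup for three names in each busybox replica. A compact, illustrative Go sketch of that loop (pod names, context and lookups taken from the log above; this is not the ha_test.go implementation) could be:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-nrdqk", "busybox-7dff88458-vh5j6", "busybox-7dff88458-wc5kg"}
	lookups := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range lookups {
			// Same check the test performs: DNS must resolve from inside every replica.
			cmd := exec.Command("kubectl", "--context", "ha-994751", "exec", pod, "--", "nslookup", name)
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("%s: nslookup %s failed: %v\n%s\n", pod, name, err, out)
			} else {
				fmt.Printf("%s: nslookup %s ok\n", pod, name)
			}
		}
	}
}

The real test runs the same commands through out/minikube-linux-amd64 kubectl -p ha-994751, so plain kubectl with the matching context is an equivalent shortcut, not a change in behaviour.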

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- exec busybox-7dff88458-nrdqk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- exec busybox-7dff88458-nrdqk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- exec busybox-7dff88458-vh5j6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- exec busybox-7dff88458-vh5j6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- exec busybox-7dff88458-wc5kg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-994751 -- exec busybox-7dff88458-wc5kg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (59.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-994751 -v=7 --alsologtostderr
E1004 03:22:08.994291   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:22:15.014519   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:22:15.020895   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:22:15.032239   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:22:15.053642   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:22:15.095041   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:22:15.176486   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:22:15.338734   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:22:15.660395   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:22:16.301888   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:22:17.584033   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:22:20.145452   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:22:25.267226   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-994751 -v=7 --alsologtostderr: (58.29980975s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.16s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-994751 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp testdata/cp-test.txt ha-994751:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp ha-994751:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1363640037/001/cp-test_ha-994751.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp ha-994751:/home/docker/cp-test.txt ha-994751-m02:/home/docker/cp-test_ha-994751_ha-994751-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m02 "sudo cat /home/docker/cp-test_ha-994751_ha-994751-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp ha-994751:/home/docker/cp-test.txt ha-994751-m03:/home/docker/cp-test_ha-994751_ha-994751-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m03 "sudo cat /home/docker/cp-test_ha-994751_ha-994751-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp ha-994751:/home/docker/cp-test.txt ha-994751-m04:/home/docker/cp-test_ha-994751_ha-994751-m04.txt
E1004 03:22:35.509093   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m04 "sudo cat /home/docker/cp-test_ha-994751_ha-994751-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp testdata/cp-test.txt ha-994751-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp ha-994751-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1363640037/001/cp-test_ha-994751-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp ha-994751-m02:/home/docker/cp-test.txt ha-994751:/home/docker/cp-test_ha-994751-m02_ha-994751.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751 "sudo cat /home/docker/cp-test_ha-994751-m02_ha-994751.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp ha-994751-m02:/home/docker/cp-test.txt ha-994751-m03:/home/docker/cp-test_ha-994751-m02_ha-994751-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m03 "sudo cat /home/docker/cp-test_ha-994751-m02_ha-994751-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp ha-994751-m02:/home/docker/cp-test.txt ha-994751-m04:/home/docker/cp-test_ha-994751-m02_ha-994751-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m04 "sudo cat /home/docker/cp-test_ha-994751-m02_ha-994751-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp testdata/cp-test.txt ha-994751-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1363640037/001/cp-test_ha-994751-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt ha-994751:/home/docker/cp-test_ha-994751-m03_ha-994751.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751 "sudo cat /home/docker/cp-test_ha-994751-m03_ha-994751.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt ha-994751-m02:/home/docker/cp-test_ha-994751-m03_ha-994751-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m02 "sudo cat /home/docker/cp-test_ha-994751-m03_ha-994751-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp ha-994751-m03:/home/docker/cp-test.txt ha-994751-m04:/home/docker/cp-test_ha-994751-m03_ha-994751-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m04 "sudo cat /home/docker/cp-test_ha-994751-m03_ha-994751-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp testdata/cp-test.txt ha-994751-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1363640037/001/cp-test_ha-994751-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt ha-994751:/home/docker/cp-test_ha-994751-m04_ha-994751.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751 "sudo cat /home/docker/cp-test_ha-994751-m04_ha-994751.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt ha-994751-m02:/home/docker/cp-test_ha-994751-m04_ha-994751-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m02 "sudo cat /home/docker/cp-test_ha-994751-m04_ha-994751-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 cp ha-994751-m04:/home/docker/cp-test.txt ha-994751-m03:/home/docker/cp-test_ha-994751-m04_ha-994751-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 ssh -n ha-994751-m03 "sudo cat /home/docker/cp-test_ha-994751-m04_ha-994751-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-994751 node delete m03 -v=7 --alsologtostderr: (15.859751357s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.59s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (346.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-994751 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1004 03:37:08.996369   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:37:15.015962   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:38:38.079924   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-994751 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m45.527082554s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (346.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (77.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-994751 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-994751 --control-plane -v=7 --alsologtostderr: (1m16.608991298s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-994751 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                    
x
+
TestJSONOutput/start/Command (82.41s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-237142 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1004 03:42:08.996043   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:42:15.016615   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-237142 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m22.408992778s)
--- PASS: TestJSONOutput/start/Command (82.41s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
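Note on the two step checks above: they operate on the line-delimited JSON events that --output=json prints during start. A minimal, illustrative reader for that stream (the exact event schema is defined by minikube; treating step information as living under a "data" field is an assumption of this sketch) could be:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Feed this program the stdout of e.g.:
	//   out/minikube-linux-amd64 start -p json-output-237142 --output=json --user=testUser ...
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // individual JSON lines can be long
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise interleaved in the stream
		}
		// Assumed layout: step number and message sit under the event's "data" field;
		// the subtests above verify that those step numbers are distinct and increasing.
		if data, ok := ev["data"].(map[string]any); ok {
			fmt.Printf("type=%v currentstep=%v message=%v\n", ev["type"], data["currentstep"], data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}

Decoding into a generic map keeps the sketch independent of the precise field set; a stricter consumer would define a struct once the event schema is pinned down.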

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-237142 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-237142 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-237142 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-237142 --output=json --user=testUser: (7.355445879s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-457294 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-457294 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.008362ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b79c851b-84ab-4a45-b893-a49e4779b470","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-457294] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bbf599ef-930d-4c4e-86b9-518ca449cecb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19546"}}
	{"specversion":"1.0","id":"07e037b4-995c-42ec-ba91-5e09799c8eaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"041d84b5-7bd6-4e00-b06b-2d0f8f5b8903","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig"}}
	{"specversion":"1.0","id":"9450e3d5-dd86-4779-8e94-50f39ff46688","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube"}}
	{"specversion":"1.0","id":"dac73eb4-5cb8-4f52-a236-1e362b2f8097","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"de100207-9490-4696-8058-47d2496c2323","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c814a79b-8f31-43bd-9bf7-5bb4fb25355f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-457294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-457294
--- PASS: TestErrorJSONOutput (0.18s)
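Aside: the -- stdout -- block above shows that minikube's --output=json mode emits newline-delimited CloudEvents-style records (specversion, id, source, type, data). A minimal, hypothetical Go sketch for pulling the error record out of such a stream is shown below; the event struct only models the fields visible in these log lines and is not a type taken from minikube's source tree.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strings"
	)

	// event models only the fields visible in the JSON lines above;
	// it is illustrative, not minikube's own type.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// Example (hypothetical) usage:
		//   minikube start -p demo --output=json | go run parse_events.go
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" {
				continue
			}
			var ev event
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				continue // skip anything that is not a JSON event line
			}
			// Error events in the stream above use type "io.k8s.sigs.minikube.error".
			if strings.HasSuffix(ev.Type, ".error") {
				fmt.Printf("error (exit code %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}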

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (90.7s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-079257 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-079257 --driver=kvm2  --container-runtime=crio: (43.77256047s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-088606 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-088606 --driver=kvm2  --container-runtime=crio: (44.376861498s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-079257
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-088606
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-088606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-088606
helpers_test.go:175: Cleaning up "first-079257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-079257
--- PASS: TestMinikubeProfile (90.70s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (30.18s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-171469 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-171469 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.176849723s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.18s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-171469 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-171469 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.33s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-184489 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1004 03:45:12.069097   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-184489 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.332944034s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.33s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-184489 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-184489 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.66s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-171469 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-184489 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-184489 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-184489
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-184489: (1.269524035s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (20.79s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-184489
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-184489: (19.78726232s)
--- PASS: TestMountStart/serial/RestartStopped (20.79s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-184489 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-184489 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (114.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-355278 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1004 03:47:08.995497   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:47:15.014167   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-355278 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m53.795421309s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.19s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-355278 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-355278 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-355278 -- rollout status deployment/busybox: (4.054913862s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-355278 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-355278 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-355278 -- exec busybox-7dff88458-9vdx7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-355278 -- exec busybox-7dff88458-n69h5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-355278 -- exec busybox-7dff88458-9vdx7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-355278 -- exec busybox-7dff88458-n69h5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-355278 -- exec busybox-7dff88458-9vdx7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-355278 -- exec busybox-7dff88458-n69h5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.46s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-355278 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-355278 -- exec busybox-7dff88458-9vdx7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-355278 -- exec busybox-7dff88458-9vdx7 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-355278 -- exec busybox-7dff88458-n69h5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-355278 -- exec busybox-7dff88458-n69h5 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                    
TestMultiNode/serial/AddNode (50.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-355278 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-355278 -v 3 --alsologtostderr: (49.844722851s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.41s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-355278 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.55s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 cp testdata/cp-test.txt multinode-355278:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 cp multinode-355278:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile498822491/001/cp-test_multinode-355278.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 cp multinode-355278:/home/docker/cp-test.txt multinode-355278-m02:/home/docker/cp-test_multinode-355278_multinode-355278-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278-m02 "sudo cat /home/docker/cp-test_multinode-355278_multinode-355278-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 cp multinode-355278:/home/docker/cp-test.txt multinode-355278-m03:/home/docker/cp-test_multinode-355278_multinode-355278-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278-m03 "sudo cat /home/docker/cp-test_multinode-355278_multinode-355278-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 cp testdata/cp-test.txt multinode-355278-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 cp multinode-355278-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile498822491/001/cp-test_multinode-355278-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 cp multinode-355278-m02:/home/docker/cp-test.txt multinode-355278:/home/docker/cp-test_multinode-355278-m02_multinode-355278.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278 "sudo cat /home/docker/cp-test_multinode-355278-m02_multinode-355278.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 cp multinode-355278-m02:/home/docker/cp-test.txt multinode-355278-m03:/home/docker/cp-test_multinode-355278-m02_multinode-355278-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278-m03 "sudo cat /home/docker/cp-test_multinode-355278-m02_multinode-355278-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 cp testdata/cp-test.txt multinode-355278-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 cp multinode-355278-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile498822491/001/cp-test_multinode-355278-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 cp multinode-355278-m03:/home/docker/cp-test.txt multinode-355278:/home/docker/cp-test_multinode-355278-m03_multinode-355278.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278 "sudo cat /home/docker/cp-test_multinode-355278-m03_multinode-355278.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 cp multinode-355278-m03:/home/docker/cp-test.txt multinode-355278-m02:/home/docker/cp-test_multinode-355278-m03_multinode-355278-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 ssh -n multinode-355278-m02 "sudo cat /home/docker/cp-test_multinode-355278-m03_multinode-355278-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.87s)

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-355278 node stop m03: (1.43162246s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-355278 status: exit status 7 (414.102956ms)

                                                
                                                
-- stdout --
	multinode-355278
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-355278-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-355278-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-355278 status --alsologtostderr: exit status 7 (413.127276ms)

                                                
                                                
-- stdout --
	multinode-355278
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-355278-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-355278-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 03:48:44.661545   47534 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:48:44.661772   47534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:48:44.661781   47534 out.go:358] Setting ErrFile to fd 2...
	I1004 03:48:44.661785   47534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:48:44.661944   47534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-9647/.minikube/bin
	I1004 03:48:44.662084   47534 out.go:352] Setting JSON to false
	I1004 03:48:44.662107   47534 mustload.go:65] Loading cluster: multinode-355278
	I1004 03:48:44.662243   47534 notify.go:220] Checking for updates...
	I1004 03:48:44.662453   47534 config.go:182] Loaded profile config "multinode-355278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1004 03:48:44.662467   47534 status.go:174] checking status of multinode-355278 ...
	I1004 03:48:44.662817   47534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:48:44.662870   47534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:48:44.681552   47534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44479
	I1004 03:48:44.682035   47534 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:48:44.682597   47534 main.go:141] libmachine: Using API Version  1
	I1004 03:48:44.682627   47534 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:48:44.682993   47534 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:48:44.683148   47534 main.go:141] libmachine: (multinode-355278) Calling .GetState
	I1004 03:48:44.684633   47534 status.go:371] multinode-355278 host status = "Running" (err=<nil>)
	I1004 03:48:44.684647   47534 host.go:66] Checking if "multinode-355278" exists ...
	I1004 03:48:44.684939   47534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:48:44.684972   47534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:48:44.700381   47534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44455
	I1004 03:48:44.700806   47534 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:48:44.701230   47534 main.go:141] libmachine: Using API Version  1
	I1004 03:48:44.701254   47534 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:48:44.701545   47534 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:48:44.701699   47534 main.go:141] libmachine: (multinode-355278) Calling .GetIP
	I1004 03:48:44.704131   47534 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:48:44.704523   47534 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:48:44.704556   47534 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:48:44.704702   47534 host.go:66] Checking if "multinode-355278" exists ...
	I1004 03:48:44.704996   47534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:48:44.705043   47534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:48:44.719118   47534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37081
	I1004 03:48:44.719540   47534 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:48:44.719990   47534 main.go:141] libmachine: Using API Version  1
	I1004 03:48:44.720022   47534 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:48:44.720318   47534 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:48:44.720462   47534 main.go:141] libmachine: (multinode-355278) Calling .DriverName
	I1004 03:48:44.720611   47534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:48:44.720635   47534 main.go:141] libmachine: (multinode-355278) Calling .GetSSHHostname
	I1004 03:48:44.723025   47534 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:48:44.723395   47534 main.go:141] libmachine: (multinode-355278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:da:11", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:45:59 +0000 UTC Type:0 Mac:52:54:00:33:da:11 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-355278 Clientid:01:52:54:00:33:da:11}
	I1004 03:48:44.723428   47534 main.go:141] libmachine: (multinode-355278) DBG | domain multinode-355278 has defined IP address 192.168.39.50 and MAC address 52:54:00:33:da:11 in network mk-multinode-355278
	I1004 03:48:44.723548   47534 main.go:141] libmachine: (multinode-355278) Calling .GetSSHPort
	I1004 03:48:44.723696   47534 main.go:141] libmachine: (multinode-355278) Calling .GetSSHKeyPath
	I1004 03:48:44.723843   47534 main.go:141] libmachine: (multinode-355278) Calling .GetSSHUsername
	I1004 03:48:44.723953   47534 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/multinode-355278/id_rsa Username:docker}
	I1004 03:48:44.803318   47534 ssh_runner.go:195] Run: systemctl --version
	I1004 03:48:44.809852   47534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:48:44.826398   47534 kubeconfig.go:125] found "multinode-355278" server: "https://192.168.39.50:8443"
	I1004 03:48:44.826433   47534 api_server.go:166] Checking apiserver status ...
	I1004 03:48:44.826469   47534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:48:44.842088   47534 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup
	W1004 03:48:44.851656   47534 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1004 03:48:44.851706   47534 ssh_runner.go:195] Run: ls
	I1004 03:48:44.855942   47534 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I1004 03:48:44.860165   47534 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I1004 03:48:44.860186   47534 status.go:463] multinode-355278 apiserver status = Running (err=<nil>)
	I1004 03:48:44.860194   47534 status.go:176] multinode-355278 status: &{Name:multinode-355278 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:48:44.860212   47534 status.go:174] checking status of multinode-355278-m02 ...
	I1004 03:48:44.860497   47534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:48:44.860535   47534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:48:44.875171   47534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44925
	I1004 03:48:44.875584   47534 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:48:44.876029   47534 main.go:141] libmachine: Using API Version  1
	I1004 03:48:44.876049   47534 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:48:44.876377   47534 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:48:44.876534   47534 main.go:141] libmachine: (multinode-355278-m02) Calling .GetState
	I1004 03:48:44.878082   47534 status.go:371] multinode-355278-m02 host status = "Running" (err=<nil>)
	I1004 03:48:44.878097   47534 host.go:66] Checking if "multinode-355278-m02" exists ...
	I1004 03:48:44.878411   47534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:48:44.878443   47534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:48:44.893149   47534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43653
	I1004 03:48:44.893582   47534 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:48:44.894054   47534 main.go:141] libmachine: Using API Version  1
	I1004 03:48:44.894078   47534 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:48:44.894388   47534 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:48:44.894545   47534 main.go:141] libmachine: (multinode-355278-m02) Calling .GetIP
	I1004 03:48:44.897208   47534 main.go:141] libmachine: (multinode-355278-m02) DBG | domain multinode-355278-m02 has defined MAC address 52:54:00:e8:c7:f7 in network mk-multinode-355278
	I1004 03:48:44.897608   47534 main.go:141] libmachine: (multinode-355278-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c7:f7", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:47:01 +0000 UTC Type:0 Mac:52:54:00:e8:c7:f7 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-355278-m02 Clientid:01:52:54:00:e8:c7:f7}
	I1004 03:48:44.897640   47534 main.go:141] libmachine: (multinode-355278-m02) DBG | domain multinode-355278-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:e8:c7:f7 in network mk-multinode-355278
	I1004 03:48:44.897753   47534 host.go:66] Checking if "multinode-355278-m02" exists ...
	I1004 03:48:44.898046   47534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:48:44.898086   47534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:48:44.913583   47534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46725
	I1004 03:48:44.914017   47534 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:48:44.914488   47534 main.go:141] libmachine: Using API Version  1
	I1004 03:48:44.914507   47534 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:48:44.914813   47534 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:48:44.914974   47534 main.go:141] libmachine: (multinode-355278-m02) Calling .DriverName
	I1004 03:48:44.915108   47534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:48:44.915124   47534 main.go:141] libmachine: (multinode-355278-m02) Calling .GetSSHHostname
	I1004 03:48:44.917672   47534 main.go:141] libmachine: (multinode-355278-m02) DBG | domain multinode-355278-m02 has defined MAC address 52:54:00:e8:c7:f7 in network mk-multinode-355278
	I1004 03:48:44.918024   47534 main.go:141] libmachine: (multinode-355278-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c7:f7", ip: ""} in network mk-multinode-355278: {Iface:virbr1 ExpiryTime:2024-10-04 04:47:01 +0000 UTC Type:0 Mac:52:54:00:e8:c7:f7 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-355278-m02 Clientid:01:52:54:00:e8:c7:f7}
	I1004 03:48:44.918050   47534 main.go:141] libmachine: (multinode-355278-m02) DBG | domain multinode-355278-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:e8:c7:f7 in network mk-multinode-355278
	I1004 03:48:44.918154   47534 main.go:141] libmachine: (multinode-355278-m02) Calling .GetSSHPort
	I1004 03:48:44.918319   47534 main.go:141] libmachine: (multinode-355278-m02) Calling .GetSSHKeyPath
	I1004 03:48:44.918468   47534 main.go:141] libmachine: (multinode-355278-m02) Calling .GetSSHUsername
	I1004 03:48:44.918589   47534 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19546-9647/.minikube/machines/multinode-355278-m02/id_rsa Username:docker}
	I1004 03:48:44.998967   47534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:48:45.013522   47534 status.go:176] multinode-355278-m02 status: &{Name:multinode-355278-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:48:45.013556   47534 status.go:174] checking status of multinode-355278-m03 ...
	I1004 03:48:45.013869   47534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 03:48:45.013909   47534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 03:48:45.030080   47534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36919
	I1004 03:48:45.030464   47534 main.go:141] libmachine: () Calling .GetVersion
	I1004 03:48:45.030938   47534 main.go:141] libmachine: Using API Version  1
	I1004 03:48:45.030959   47534 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 03:48:45.031251   47534 main.go:141] libmachine: () Calling .GetMachineName
	I1004 03:48:45.031433   47534 main.go:141] libmachine: (multinode-355278-m03) Calling .GetState
	I1004 03:48:45.033024   47534 status.go:371] multinode-355278-m03 host status = "Stopped" (err=<nil>)
	I1004 03:48:45.033036   47534 status.go:384] host is not running, skipping remaining checks
	I1004 03:48:45.033042   47534 status.go:176] multinode-355278-m03 status: &{Name:multinode-355278-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
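Aside: for anyone scripting against this, the status --output json form used earlier in this test emits machine-readable per-node status; the struct dumped in the stderr trace shows the fields involved (Name, Host, Kubelet, APIServer, Kubeconfig). A rough, hypothetical Go sketch that shells out to it is below; the exact JSON shape (array vs. single object, field casing) is an assumption to verify against real output, and as the run above shows, the command exits non-zero when any node is stopped.

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Hypothetical profile name; substitute your own.
		cmd := exec.Command("minikube", "-p", "multinode-355278", "status", "--output", "json")
		out, err := cmd.Output()
		// minikube status exits non-zero when a component is stopped,
		// so keep whatever output was produced even if err != nil.
		if len(out) == 0 && err != nil {
			log.Fatalf("status produced no output: %v", err)
		}

		// Assumption: a multi-node profile yields a JSON array of node objects;
		// fall back to a single object if the array decode fails.
		var nodes []map[string]any
		if uErr := json.Unmarshal(out, &nodes); uErr != nil {
			var single map[string]any
			if uErr2 := json.Unmarshal(out, &single); uErr2 != nil {
				log.Fatalf("unexpected status output: %v", uErr2)
			}
			nodes = append(nodes, single)
		}
		for _, n := range nodes {
			fmt.Println(n) // e.g. map[APIServer:Running Host:Running Kubelet:Running ...]
		}
		if err != nil {
			fmt.Println("non-zero exit: at least one node or component is not running")
		}
	}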

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-355278 node start m03 -v=7 --alsologtostderr: (38.639174566s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.24s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-355278 node delete m03: (1.490022855s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.01s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (182.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-355278 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-355278 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m1.950883894s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-355278 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (182.48s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-355278
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-355278-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-355278-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.830475ms)

                                                
                                                
-- stdout --
	* [multinode-355278-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-355278-m02' is duplicated with machine name 'multinode-355278-m02' in profile 'multinode-355278'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-355278-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-355278-m03 --driver=kvm2  --container-runtime=crio: (42.578066469s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-355278
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-355278: exit status 80 (201.626881ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-355278 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-355278-m03 already exists in multinode-355278-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-355278-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.86s)

                                                
                                    
TestScheduledStopUnix (114.69s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-586252 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-586252 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.05303093s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-586252 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-586252 -n scheduled-stop-586252
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-586252 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1004 04:06:27.946486   16879 retry.go:31] will retry after 124.461µs: open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/scheduled-stop-586252/pid: no such file or directory
I1004 04:06:27.947632   16879 retry.go:31] will retry after 189.956µs: open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/scheduled-stop-586252/pid: no such file or directory
I1004 04:06:27.948777   16879 retry.go:31] will retry after 228.738µs: open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/scheduled-stop-586252/pid: no such file or directory
I1004 04:06:27.949937   16879 retry.go:31] will retry after 211.624µs: open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/scheduled-stop-586252/pid: no such file or directory
I1004 04:06:27.951103   16879 retry.go:31] will retry after 647.999µs: open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/scheduled-stop-586252/pid: no such file or directory
I1004 04:06:27.952299   16879 retry.go:31] will retry after 1.093234ms: open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/scheduled-stop-586252/pid: no such file or directory
I1004 04:06:27.953465   16879 retry.go:31] will retry after 593.337µs: open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/scheduled-stop-586252/pid: no such file or directory
I1004 04:06:27.954615   16879 retry.go:31] will retry after 1.706462ms: open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/scheduled-stop-586252/pid: no such file or directory
I1004 04:06:27.956824   16879 retry.go:31] will retry after 1.356821ms: open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/scheduled-stop-586252/pid: no such file or directory
I1004 04:06:27.959088   16879 retry.go:31] will retry after 4.466534ms: open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/scheduled-stop-586252/pid: no such file or directory
I1004 04:06:27.964364   16879 retry.go:31] will retry after 4.924211ms: open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/scheduled-stop-586252/pid: no such file or directory
I1004 04:06:27.969623   16879 retry.go:31] will retry after 10.479049ms: open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/scheduled-stop-586252/pid: no such file or directory
I1004 04:06:27.980911   16879 retry.go:31] will retry after 10.845366ms: open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/scheduled-stop-586252/pid: no such file or directory
I1004 04:06:27.992182   16879 retry.go:31] will retry after 20.531ms: open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/scheduled-stop-586252/pid: no such file or directory
I1004 04:06:28.013439   16879 retry.go:31] will retry after 37.655558ms: open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/scheduled-stop-586252/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-586252 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-586252 -n scheduled-stop-586252
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-586252
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-586252 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1004 04:07:08.994315   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:07:15.013838   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-586252
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-586252: exit status 7 (63.248724ms)

                                                
                                                
-- stdout --
	scheduled-stop-586252
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-586252 -n scheduled-stop-586252
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-586252 -n scheduled-stop-586252: exit status 7 (63.410615ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-586252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-586252
--- PASS: TestScheduledStopUnix (114.69s)

                                                
                                    
TestRunningBinaryUpgrade (143.98s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2954536498 start -p running-upgrade-552490 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2954536498 start -p running-upgrade-552490 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (56.325928525s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-552490 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-552490 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m23.846357468s)
helpers_test.go:175: Cleaning up "running-upgrade-552490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-552490
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-552490: (1.169964369s)
--- PASS: TestRunningBinaryUpgrade (143.98s)

                                                
                                    
x
+
TestPause/serial/Start (86.65s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-353264 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-353264 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m26.648251469s)
--- PASS: TestPause/serial/Start (86.65s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.88s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (117.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1780131700 start -p stopped-upgrade-389737 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1780131700 start -p stopped-upgrade-389737 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m12.02716734s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1780131700 -p stopped-upgrade-389737 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1780131700 -p stopped-upgrade-389737 stop: (1.485289977s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-389737 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1004 04:11:58.084939   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:12:08.993592   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-389737 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.165608808s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (117.68s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-316059 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-316059 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (69.035068ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-316059] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
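
This subtest passes because the invocation is expected to be rejected: --no-kubernetes and --kubernetes-version are mutually exclusive, so minikube exits with status 14 (MK_USAGE) before doing any work, and the stderr above points at "minikube config unset kubernetes-version" as the fix. A minimal sketch of asserting that rejection, reusing the flags and profile name from the run above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "NoKubernetes-316059",
		"--no-kubernetes", "--kubernetes-version=1.20", "--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if !errors.As(err, &exitErr) || exitErr.ExitCode() != 14 {
		panic("expected the usage error (exit status 14) seen in the report above")
	}
	if !strings.Contains(string(out), "MK_USAGE") {
		panic("expected the MK_USAGE message from stderr above")
	}
	// Per the stderr hint, dropping the pinned version makes --no-kubernetes usable:
	//   minikube config unset kubernetes-version
	fmt.Println("conflicting flags rejected as expected")
}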

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (47.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-316059 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-316059 --driver=kvm2  --container-runtime=crio: (47.211148504s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-316059 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.47s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-389737
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (47.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-316059 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-316059 --no-kubernetes --driver=kvm2  --container-runtime=crio: (46.204607286s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-316059 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-316059 status -o json: exit status 2 (266.703304ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-316059","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-316059
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-316059: (1.056878435s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (47.53s)
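
status -o json still prints a JSON document even though the command exits with status 2 while the kubelet and apiserver are down; the test above tolerates the exit code and inspects the fields. A minimal sketch of decoding that output; the struct mirrors only the fields visible in the JSON printed above, and the real minikube status type may carry more:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors the fields visible in the JSON printed by this run.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "NoKubernetes-316059", "status", "-o", "json")
	out, _ := cmd.Output() // exit status 2 is expected here while the kubelet is stopped

	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	// In this run: Host=Running, Kubelet=Stopped, APIServer=Stopped.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}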

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-316059 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-316059 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.234991766s)
--- PASS: TestNoKubernetes/serial/Start (28.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (103.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-658545 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-658545 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m43.661420955s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (103.66s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-316059 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-316059 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.689697ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
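
The verification passes precisely because the remote command fails: with Kubernetes disabled the kubelet unit is inactive, so systemctl is-active exits non-zero inside the guest (status 3 in the stderr above) and minikube ssh surfaces that as exit status 1, which the test records as success. A minimal sketch of the same assertion against the profile from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Ask the guest whether the kubelet unit is active; --quiet suppresses output
	// and leaves only the exit code, exactly as in the command above.
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-316059",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("kubelet not running (as expected); ssh exit code:", exitErr.ExitCode())
		return
	}
	panic("kubelet unit is active, but this profile was started with --no-kubernetes")
}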

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (32.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (13.868774181s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (18.168706033s)
--- PASS: TestNoKubernetes/serial/ProfileList (32.04s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-316059
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-316059: (1.423797417s)
--- PASS: TestNoKubernetes/serial/Stop (1.42s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (80.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-934812 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-934812 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m20.147927702s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-658545 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [61784d4d-400f-48bd-9ff5-aa2cdcc3a074] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [61784d4d-400f-48bd-9ff5-aa2cdcc3a074] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005006014s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-658545 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.47s)
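
DeployApp applies testdata/busybox.yaml, waits up to 8 minutes for a pod labelled integration-test=busybox to become Ready, then runs ulimit -n inside it. A minimal sketch of the same flow; the test itself polls the pod list through its own helpers, whereas the sketch below leans on kubectl wait (an assumption, not what the suite uses), with the context name taken from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "no-preload-658545"

	// Apply the busybox manifest used by the test.
	if out, err := exec.Command("kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("create failed: %v\n%s", err, out))
	}

	// Wait for the labelled pod to become Ready within the same 8m window.
	if out, err := exec.Command("kubectl", "--context", ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("pod never became ready: %v\n%s", err, out))
	}

	// Same smoke check as the test: read the open-file limit inside the container.
	out, err := exec.Command("kubectl", "--context", ctx, "exec", "busybox", "--",
		"/bin/sh", "-c", "ulimit -n").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Printf("ulimit -n inside busybox: %s", out)
}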

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-316059 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-316059 "sudo systemctl is-active --quiet service kubelet": exit status 1 (203.108927ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-617497 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-617497 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (47.772763614s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.77s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-658545 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-658545 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.008920356s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-658545 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)
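
The enable call rewrites the addon's image reference: --images=MetricsServer=registry.k8s.io/echoserver:1.4 pins the image and --registries=MetricsServer=fake.domain points it at an unreachable registry, so the deployment is created but its image cannot be pulled; the follow-up kubectl describe only confirms the override landed. A minimal sketch of enabling the addon with the same overrides and reading the resulting image back with a jsonpath query (the read-back command is an illustration, not part of the test):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "no-preload-658545"

	// Enable metrics-server with the image pinned and the registry redirected
	// to a non-existent host, exactly as in the run above.
	enable := exec.Command("out/minikube-linux-amd64", "addons", "enable", "metrics-server", "-p", profile,
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain")
	if out, err := enable.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("enable failed: %v\n%s", err, out))
	}

	// Read back the image actually set on the deployment.
	img, err := exec.Command("kubectl", "--context", profile, "-n", "kube-system",
		"get", "deploy", "metrics-server", "-o",
		"jsonpath={.spec.template.spec.containers[0].image}").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Printf("metrics-server image: %s\n", img) // expected to reference fake.domain/.../echoserver:1.4
}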

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-934812 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7976d42e-6b0e-4b71-8d77-1df94c621ea4] Pending
helpers_test.go:344: "busybox" [7976d42e-6b0e-4b71-8d77-1df94c621ea4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7976d42e-6b0e-4b71-8d77-1df94c621ea4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003916346s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-934812 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-934812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-934812 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-617497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-617497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.12307285s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-617497 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-617497 --alsologtostderr -v=3: (10.642439234s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.64s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-617497 -n newest-cni-617497
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-617497 -n newest-cni-617497: exit status 7 (62.001891ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-617497 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-617497 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1004 04:17:08.993954   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:17:15.014503   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-617497 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (37.388355626s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-617497 -n newest-cni-617497
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.68s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-617497 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-617497 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-617497 -n newest-cni-617497
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-617497 -n newest-cni-617497: exit status 2 (234.702831ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-617497 -n newest-cni-617497
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-617497 -n newest-cni-617497: exit status 2 (238.473303ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-617497 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-617497 -n newest-cni-617497
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-617497 -n newest-cni-617497
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.34s)
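
While the profile is paused, the single-field status probes themselves exit with status 2 even though they print the expected values (APIServer "Paused", Kubelet "Stopped"); after unpause the same probes return cleanly. A minimal sketch of that round trip, assuming the profile/node name from this run; the status helper below is illustrative, not the suite's helper code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// status runs a single-field status probe and returns its output and exit code.
func status(profile, field string) (string, int) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	profile := "newest-cni-617497"

	_ = exec.Command("out/minikube-linux-amd64", "pause", "-p", profile, "--alsologtostderr", "-v=1").Run()
	api, code := status(profile, "APIServer")
	fmt.Printf("after pause:   APIServer=%q exit=%d\n", api, code) // "Paused", exit 2 in this run

	_ = exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile, "--alsologtostderr", "-v=1").Run()
	api, code = status(profile, "APIServer")
	fmt.Printf("after unpause: APIServer=%q exit=%d\n", api, code) // exit 0 once components are back up
}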

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-281471 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-281471 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (58.091906641s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (645.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-658545 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-658545 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m45.533666655s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658545 -n no-preload-658545
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (645.78s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-281471 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5bf12a9c-f04f-41fe-803a-88cc8e2e2219] Pending
helpers_test.go:344: "busybox" [5bf12a9c-f04f-41fe-803a-88cc8e2e2219] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5bf12a9c-f04f-41fe-803a-88cc8e2e2219] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004316498s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-281471 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-281471 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-281471 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (583.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-934812 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-934812 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m42.807515138s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-934812 -n embed-certs-934812
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (583.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (3.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-420062 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-420062 --alsologtostderr -v=3: (3.313157281s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420062 -n old-k8s-version-420062
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-420062 -n old-k8s-version-420062: exit status 7 (63.043179ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-420062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (453.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-281471 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1004 04:22:08.994589   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:22:15.014632   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:27:08.993602   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/addons-335265/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:27:15.013926   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:28:38.087126   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/functional-994735/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-281471 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (7m33.303678328s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (453.55s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-281471 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-281471 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471
E1004 04:47:05.453807   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:47:05.460190   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:47:05.471584   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:47:05.492955   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471: exit status 2 (290.062057ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471
E1004 04:47:05.534358   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:47:05.615792   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.crt: no such file or directory" logger="UnhandledError"
E1004 04:47:05.777648   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471: exit status 2 (282.089341ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-281471 --alsologtostderr -v=1
E1004 04:47:06.099227   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471
E1004 04:47:06.741381   16879 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-9647/.minikube/profiles/old-k8s-version-420062/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-281471 -n default-k8s-diff-port-281471
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.99s)

                                                
                                    

Test skip (35/267)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:796: skipping: crio not supported
addons_test.go:990: (dbg) Run:  out/minikube-linux-amd64 -p addons-335265 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:423: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-786799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-786799
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    